query_id (string, 32 chars) | query (string, 5–4.91k chars) | positive_passages (list, 1–22 items) | negative_passages (list, 9–100 items) | subset (string, 7 classes) |
---|---|---|---|---|
54739b925463523a5fa7e2294e6749a3
|
Ten years of a model of aesthetic appreciation and aesthetic judgments : The aesthetic episode - Developments and challenges in empirical aesthetics.
|
[
{
"docid": "78c3573511176ba63e2cf727e09c7eb4",
"text": "Human aesthetic preference in the visual domain is reviewed from definitional, methodological, empirical, and theoretical perspectives. Aesthetic science is distinguished from the perception of art and from philosophical treatments of aesthetics. The strengths and weaknesses of important behavioral techniques are presented and discussed, including two-alternative forced-choice, rank order, subjective rating, production/adjustment, indirect, and other tasks. Major findings are reviewed about preferences for colors (single colors, color combinations, and color harmony), spatial structure (low-level spatial properties, shape properties, and spatial composition within a frame), and individual differences in both color and spatial structure. Major theoretical accounts of aesthetic response are outlined and evaluated, including explanations in terms of mere exposure effects, arousal dynamics, categorical prototypes, ecological factors, perceptual and conceptual fluency, and the interaction of multiple components. The results of the review support the conclusion that aesthetic response can be studied rigorously and meaningfully within the framework of scientific psychology.",
"title": ""
}
] |
[
{
"docid": "396f6b6c09e88ca8e9e47022f1ae195b",
"text": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level.",
"title": ""
},
{
"docid": "57fbb5bf0e7fe4b8be21fae87f027572",
"text": "Android and iOS devices are leading the mobile device market. While various user experiences have been reported from the general user community about their differences, such as battery lifetime, display, and touchpad control, few in-depth reports can be found about their comparative performance when receiving the increasingly popular Internet streaming services. Today, video traffic starts to dominate the Internet mobile data traffic. In this work, focusing on Internet streaming accesses, we set to analyze and compare the performance when Android and iOS devices are accessing Internet streaming services. Starting from the analysis of a server-side workload collected from a top mobile streaming service provider, we find Android and iOS use different approaches to request media content, leading to different amounts of received traffic on Android and iOS devices when a same video clip is accessed. Further studies on the client side show that different data requesting approaches (standard HTTP request vs. HTTP range request) and different buffer management methods (static vs. dynamic) are used in Android and iOS mediaplayers, and their interplay has led to our observations. Our empirical results and analysis provide some insights for the current Android and iOS users, streaming service providers, and mobile mediaplayer developers.",
"title": ""
},
{
"docid": "85f67ab0e1adad72bbe6417d67fd4c81",
"text": "Data warehouses are used to store large amounts of data. This data is often used for On-Line Analytical Processing (OLAP). Short response times are essential for on-line decision support. Common approaches to reach this goal in read-mostly environments are the precomputation of materialized views and the use of index structures. In this paper, a framework is presented to evaluate different index structures analytically depending on nine parameters for the use in a data warehouse environment. The framework is applied to four different index structures to evaluate which structure works best for range queries. We show that all parameters influence the performance. Additionally, we show why bitmap index structures use modern disks better than traditional tree structures and why bitmaps will supplant the tree based index structures in the future.",
"title": ""
},
{
"docid": "619c905f7ef5fa0314177b109e0ec0e6",
"text": "The aim of this review is to systematically summarise qualitative evidence about work-based learning in health care organisations as experienced by nursing staff. Work-based learning is understood as informal learning that occurs inside the work community in the interaction between employees. Studies for this review were searched for in the CINAHL, PubMed, Scopus and ABI Inform ProQuest databases for the period 2000-2015. Nine original studies met the inclusion criteria. After the critical appraisal by two researchers, all nine studies were selected for the review. The findings of the original studies were aggregated, and four statements were prepared, to be utilised in clinical work and decision-making. The statements concerned the following issues: (1) the culture of the work community; (2) the physical structures, spaces and duties of the work unit; (3) management; and (4) interpersonal relations. Understanding the nurses' experiences of work-based learning and factors behind these experiences provides an opportunity to influence the challenges of learning in the demanding context of health care organisations.",
"title": ""
},
{
"docid": "d135e72c317ea28a64a187b17541f773",
"text": "Automatic face recognition (AFR) is an area with immense practical potential which includes a wide range of commercial and law enforcement applications, and it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state-of-the-art in AFR continues to improve, benefiting from advances in a range of different fields including image processing, pattern recognition, computer graphics and physiology. However, systems based on visible spectrum images continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease their accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject.",
"title": ""
},
{
"docid": "689f7aad97d36f71e43e843a331fcf5d",
"text": "Dimension-reducing feature extraction neural network techniques which also preserve neighbourhood relationships in data have traditionally been the exclusive domain of Kohonen self organising maps. Recently, we introduced a novel dimension-reducing feature extraction process, which is also topographic, based upon a Radial Basis Function architecture. It has been observed that the generalisation performance of the system is broadly insensitive to model order complexity and other smoothing factors such as the kernel widths, contrary to intuition derived from supervised neural network models. In this paper we provide an effective demonstration of this property and give a theoretical justification for the apparent 'self-regularising' behaviour of the 'NEUROSCALE' architecture. 1 'NeuroScale': A Feed-forward Neural Network Topographic Transformation Recently an important class of topographic neural network based feature extraction approaches, which can be related to the traditional statistical methods of Sammon Mappings (Sammon, 1969) and Multidimensional Scaling (Kruskal, 1964), have been introduced (Mao and Jain, 1995; Lowe, 1993; Webb, 1995; Lowe and Tipping, 1996). These novel alternatives to Kohonen-like approaches for topographic feature extraction possess several interesting properties. For instance, the NEuROSCALE architecture has the empirically observed property that the generalisation perfor544 D. Lowe and M. E. Tipping mance does not seem to depend critically on model order complexity, contrary to intuition based upon knowledge of its supervised counterparts. This paper presents evidence for their 'self-regularising' behaviour and provides an explanation in terms of the curvature of the trained models. We now provide a brief introduction to the NEUROSCALE philosophy of nonlinear topographic feature extraction. Further details may be found in (Lowe, 1993; Lowe and Tipping, 1996). We seek a dimension-reducing, topographic transformation of data for the purposes of visualisation and analysis. By 'topographic', we imply that the geometric structure of the data be optimally preserved in the transformation, and the embodiment of this constraint is that the inter-point distances in the feature space should correspond as closely as possible to those distances in the data space. The implementation of this principle by a neural network is very simple. A Radial Basis Function (RBF) neural network is utilised to predict the coordinates of the data point in the transformed feature space. The locations of the feature points are indirectly determined by adjusting the weights of the network. The transformation is determined by optimising the network parameters in order to minimise a suitable error measure that embodies the topographic principle. The specific details of this alternative approach are as follows. Given an mdimensional input space of N data points x q , an n-dimensional feature space of points Yq is generated such that the relative positions of the feature space points minimise the error, or 'STRESS', term: N E = 2: 2:(d~p dqp )2, (1) p q>p where the d~p are the inter-point Euclidean distances in the data space: d~p = J(xq Xp)T(Xq xp), and the dqp are the corresponding distances in the feature space: dqp = J(Yq Yp)T(Yq Yp)· The points yare generated by the RBF, given the data points as input. That is, Yq = f(xq;W), where f is the nonlinear transformation effected by the RBF with parameters (weights and any kernel smoothing factors) W. 
The distances in the feature space may thus be given by dqp =11 f(xq) f(xp) \" and so more explicitly by",
"title": ""
},
{
"docid": "5e240ad1d257a90c0ca414ce8e7e0949",
"text": "Improving Cloud Security using Secure Enclaves by Jethro Gideon Beekman Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences University of California, Berkeley Professor David Wagner, Chair Internet services can provide a wealth of functionality, yet their usage raises privacy, security and integrity concerns for users. This is caused by a lack of guarantees about what is happening on the server side. As a worst case scenario, the service might be subjected to an insider attack. This dissertation describes the unalterable secure service concept for trustworthy cloud computing. Secure services are a powerful abstraction that enables viewing the cloud as a true extension of local computing resources. Secure services combine the security benefits one gets locally with the manageability and availability of the distributed cloud. Secure services are implemented using secure enclaves. Remote attestation of the server is used to obtain guarantees about the programming of the service. This dissertation addresses concerns related to using secure enclaves such as providing data freshness and distributing identity information. Certificate Transparency is augmented to distribute information about which services exist and what they do. All combined, this creates a platform that allows legacy clients to obtain security guarantees about Internet services.",
"title": ""
},
{
"docid": "4d040791f63af5e2ff13ff2b705dc376",
"text": "The frequency and severity of forest fires, coupled with changes in spatial and temporal precipitation and temperature patterns, are likely to severely affect the characteristics of forest and permafrost patterns in boreal eco-regions. Forest fires, however, are also an ecological factor in how forest ecosystems form and function, as they affect the rate and characteristics of tree recruitment. A better understanding of fire regimes and forest recovery patterns in different environmental and climatic conditions will improve the management of sustainable forests by facilitating the process of forest resilience. Remote sensing has been identified as an effective tool for preventing and monitoring forest fires, as well as being a potential tool for understanding how forest ecosystems respond to them. However, a number of challenges remain before remote sensing practitioners will be able to better understand the effects of forest fires and how vegetation responds afterward. This article attempts to provide a comprehensive review of current research with respect to remotely sensed data and methods used to model post-fire effects and forest recovery patterns in boreal forest regions. The review reveals that remote sensing-based monitoring of post-fire effects and forest recovery patterns in boreal forest regions is not only limited by the gaps in both field data and remotely sensed data, but also the complexity of far-northern fire regimes, climatic conditions and environmental conditions. We expect that the integration of different remotely sensed data coupled with field campaigns can provide an important data source to support the monitoring of post-fire effects and forest recovery patterns. Additionally, the variation and stratification of preand post-fire vegetation and environmental conditions should be considered to achieve a reasonable, operational model for monitoring post-fire effects and forest patterns in boreal regions. OPEN ACCESS Remote Sens. 2014, 6 471",
"title": ""
},
{
"docid": "807e008d5c7339706f8cfe71e9ced7ba",
"text": "Current competitive challenges induced by globalization and advances in information technology have forced companies to focus on managing customer relationships, and in particular customer satisfaction, in order to efficiently maximize revenues. This paper reports exploratory research based on a mail survey addressed to the largest 1,000 Greek organizations. The objectives of the research were: to investigate the extent of the usage of customerand market-related knowledge management (KM) instruments and customer relationship management (CRM) systems by Greek organizations and their relationship with demographic and organizational variables; to investigate whether enterprises systematically carry out customer satisfaction and complaining behavior research; and to examine the impact of the type of the information system used and managers’ attitudes towards customer KM practices. In addition, a conceptual model of CRM development stages is proposed. The findings of the survey show that about half of the organizations of the sample do not adopt any CRM philosophy. The remaining organizations employ instruments to conduct customer satisfaction and other customer-related research. However, according to the proposed model, they are positioned in the first, the preliminary CRM development stage. The findings also suggest that managers hold positive attitudes towards CRM and that there is no significant relationship between the type of the transactional information system used and the extent to which customer satisfaction research is performed by the organizations. The paper concludes by discussing the survey findings and proposing future",
"title": ""
},
{
"docid": "4ed74450320dfef4156013292c1d2cbb",
"text": "This paper describes the decisions by which teh Association for Computing Machinery integrated good features from the Los Alamos e-print (physics) archive and from Cornell University's Networked Computer Science Technical Reference Library to form their own open, permanent, online “computing research repository” (CoRR). Submitted papers are not refereed and anyone can browse and extract CoRR material for free, so Corr's eventual success could revolutionize computer science publishing. But several serious challenges remain: some journals forbid online preprints, teh CoRR user interface is cumbersome, submissions are only self-indexed, (no professional library staff manages teh archive) and long-term funding is uncertain.",
"title": ""
},
{
"docid": "0105070bd23400083850627b1603af0b",
"text": "This research covers an endeavor by the author on the usage of automated vision and navigation framework; the research is conducted by utilizing a Kinect sensor requiring minimal effort framework for exploration purposes in the zone of robot route. For this framework, GMapping (a highly efficient Rao-Blackwellized particle filer to learn grid maps from laser range data) parameters have been optimized to improve the accuracy of the map generation and the laser scan. With the use of Robot Operating System (ROS), the open source GMapping bundle was utilized as a premise for a map era and Simultaneous Localization and Mapping (SLAM). Out of the many different map generation techniques, the tele-operation used is interactive marker, which controls the TurtleBot 2 movements via RVIZ (3D visualization tool for ROS). Test results completed with the multipurpose robot in a counterfeit and regular environment represents the preferences of the proposed strategy. From experiments, it is found that Kinect sensor produces a more accurate map compared to non-filtered laser range finder data, which is excellent since the price of a Kinect sensor is much cheaper than a laser range finder. An expansion of experimental results was likewise done to test the performance of the portable robot frontier exploring in an obscure environment while performing SLAM alongside the proposed technique.",
"title": ""
},
{
"docid": "e3299737a0fb3cd3c9433f462565b278",
"text": "BACKGROUND\nMore than two-thirds of pregnant women experience low-back pain and almost one-fifth experience pelvic pain. The two conditions may occur separately or together (low-back and pelvic pain) and typically increase with advancing pregnancy, interfering with work, daily activities and sleep.\n\n\nOBJECTIVES\nTo update the evidence assessing the effects of any intervention used to prevent and treat low-back pain, pelvic pain or both during pregnancy.\n\n\nSEARCH METHODS\nWe searched the Cochrane Pregnancy and Childbirth (to 19 January 2015), and the Cochrane Back Review Groups' (to 19 January 2015) Trials Registers, identified relevant studies and reviews and checked their reference lists.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs) of any treatment, or combination of treatments, to prevent or reduce the incidence or severity of low-back pain, pelvic pain or both, related functional disability, sick leave and adverse effects during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy.\n\n\nMAIN RESULTS\nWe included 34 RCTs examining 5121 pregnant women, aged 16 to 45 years and, when reported, from 12 to 38 weeks' gestation. Fifteen RCTs examined women with low-back pain (participants = 1847); six examined pelvic pain (participants = 889); and 13 examined women with both low-back and pelvic pain (participants = 2385). Two studies also investigated low-back pain prevention and four, low-back and pelvic pain prevention. Diagnoses ranged from self-reported symptoms to clinicians' interpretation of specific tests. All interventions were added to usual prenatal care and, unless noted, were compared with usual prenatal care. The quality of the evidence ranged from moderate to low, raising concerns about the confidence we could put in the estimates of effect. For low-back painResults from meta-analyses provided low-quality evidence (study design limitations, inconsistency) that any land-based exercise significantly reduced pain (standardised mean difference (SMD) -0.64; 95% confidence interval (CI) -1.03 to -0.25; participants = 645; studies = seven) and functional disability (SMD -0.56; 95% CI -0.89 to -0.23; participants = 146; studies = two). Low-quality evidence (study design limitations, imprecision) also suggested no significant differences in the number of women reporting low-back pain between group exercise, added to information about managing pain, versus usual prenatal care (risk ratio (RR) 0.97; 95% CI 0.80 to 1.17; participants = 374; studies = two). For pelvic painResults from a meta-analysis provided low-quality evidence (study design limitations, imprecision) of no significant difference in the number of women reporting pelvic pain between group exercise, added to information about managing pain, and usual prenatal care (RR 0.97; 95% CI 0.77 to 1.23; participants = 374; studies = two). 
For low-back and pelvic painResults from meta-analyses provided moderate-quality evidence (study design limitations) that: an eight- to 12-week exercise program reduced the number of women who reported low-back and pelvic pain (RR 0.66; 95% CI 0.45 to 0.97; participants = 1176; studies = four); land-based exercise, in a variety of formats, significantly reduced low-back and pelvic pain-related sick leave (RR 0.76; 95% CI 0.62 to 0.94; participants = 1062; studies = two).The results from a number of individual studies, incorporating various other interventions, could not be pooled due to clinical heterogeneity. There was moderate-quality evidence (study design limitations or imprecision) from individual studies suggesting that osteomanipulative therapy significantly reduced low-back pain and functional disability, and acupuncture or craniosacral therapy improved pelvic pain more than usual prenatal care. Evidence from individual studies was largely of low quality (study design limitations, imprecision), and suggested that pain and functional disability, but not sick leave, were significantly reduced following a multi-modal intervention (manual therapy, exercise and education) for low-back and pelvic pain.When reported, adverse effects were minor and transient.\n\n\nAUTHORS' CONCLUSIONS\nThere is low-quality evidence that exercise (any exercise on land or in water), may reduce pregnancy-related low-back pain and moderate- to low-quality evidence suggesting that any exercise improves functional disability and reduces sick leave more than usual prenatal care. Evidence from single studies suggests that acupuncture or craniosacral therapy improves pregnancy-related pelvic pain, and osteomanipulative therapy or a multi-modal intervention (manual therapy, exercise and education) may also be of benefit.Clinical heterogeneity precluded pooling of results in many cases. Statistical heterogeneity was substantial in all but three meta-analyses, which did not improve following sensitivity analyses. Publication bias and selective reporting cannot be ruled out.Further evidence is very likely to have an important impact on our confidence in the estimates of effect and change the estimates. Studies would benefit from the introduction of an agreed classification system that can be used to categorise women according to their presenting symptoms, so that treatment can be tailored accordingly.",
"title": ""
},
{
"docid": "c87cc578b4a74bae4ea1e0d0d68a6038",
"text": "Human-Computer Interaction (HCI) exists ubiquitously in our daily lives. It is usually achieved by using a physical controller such as a mouse, keyboard or touch screen. It hinders Natural User Interface (NUI) as there is a strong barrier between the user and computer. There are various hand tracking systems available on the market, but they are complex and expensive. In this paper, we present the design and development of a robust marker-less hand/finger tracking and gesture recognition system using low-cost hardware. We propose a simple but efficient method that allows robust and fast hand tracking despite complex background and motion blur. Our system is able to translate the detected hands or gestures into different functional inputs and interfaces with other applications via several methods. It enables intuitive HCI and interactive motion gaming. We also developed sample applications that can utilize the inputs from the hand tracking system. Our results show that an intuitive HCI and motion gaming system can be achieved with minimum hardware requirements.",
"title": ""
},
{
"docid": "505aff71acf5469dc718b8168de3e311",
"text": "We propose two suffix array inspired full-text indexes. One, called SAhash, augments the suffix array with a hash table to speed up pattern searches due to significantly narrowed search interval before the binary search phase. The other, called FBCSA, is a compact data structure, similar to Mäkinen’s compact suffix array, but working on fixed sized blocks. Experiments on the Pizza & Chili 200MB datasets show that SA-hash is about 2–3 times faster in pattern searches (counts) than the standard suffix array, for the price of requiring 0.2n− 1.1n bytes of extra space, where n is the text length, and setting a minimum pattern length. FBCSA is relatively fast in single cell accesses (a few times faster than related indexes at about the same or better compression), but not competitive if many consecutive cells are to be extracted. Still, for the task of extracting, e.g., 10 successive cells its time-space relation remains attractive.",
"title": ""
},
{
"docid": "efd2843175ad0b860ad1607f337addc5",
"text": "We demonstrate the usefulness of the uniform resource locator (URL) alone in performing web page classification. This approach is faster than typical web page classification, as the pages do not have to be fetched and analyzed. Our approach segments the URL into meaningful chunks and adds component, sequential and orthographic features to model salient patterns. The resulting features are used in supervised maximum entropy modeling. We analyze our approach's effectiveness on two standardized domains. Our results show that in certain scenarios, URL-based methods approach the performance of current state-of-the-art full-text and link-based methods.",
"title": ""
},
{
"docid": "ab15d55e8308843c526aed0c32db1cb2",
"text": "ix Chapter 1: Introduction 1 1.1 Knowledge Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2 Human-Robot Communication . . . . . . . . . . . . . . . . . . . . . . . 5 1.3 Life-Long Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.5 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Chapter 2: Background and Related Work 11 2.1 Manual Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.2 Task-Level Robot Control . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.3 Learning from Demonstration . . . . . . . . . . . . . . . . . . . . . . . . 13 2.3.1 Demonstration Approaches . . . . . . . . . . . . . . . . . . . . . 14 2.3.2 Policy Generation . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.4 Life-Long Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Chapter 3: Learning from Demonstration 19 3.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3.2 Role of the Instructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.3 Role of the Student . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.4 Knowledge Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 3.4.1 Human-Robot Communication . . . . . . . . . . . . . . . . . . . 24 3.4.2 System Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . 28 3.5 Learning a Task Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30",
"title": ""
},
{
"docid": "e5eb79b313dad91de1144cd0098cde15",
"text": "Information Extraction aims to retrieve certain types of information from natural language text by processing them automatically. For example, an information extraction system might retrieve information about geopolitical indicators of countries from a set of web pages while ignoring other types of information. Ontology-based information extraction has recently emerged as a subfield of information extraction. Here, ontologies which provide formal and explicit specifications of conceptualizations play a crucial role in the information extraction process. Because of the use of ontologies, this field is related to knowledge representation and has the potential to assist the development of the Semantic Web. In this paper, we provide an introduction to ontology-based information extraction and review the details of different ontology-based information extraction systems developed so far. We attempt to identify a common architecture among these systems and classify them based on different factors, which leads to a better understanding on their operation. We also discuss the implementation details of these systems including the tools used by them and the metrics used to measure their performance. In addition, we attempt to identify the possible future directions for this field.",
"title": ""
},
{
"docid": "f18833c40f6b15bb588eec3bbe52cdd4",
"text": "Presented here is a cladistic analysis of the South American and some North American Camelidae. This analysis shows that Camelini and Lamini are monophyletic groups, as are the genera Palaeolama and Vicugna, while Hemiauchenia and Lama are paraphyletic. Some aspects of the migration and distribution of South American camelids are also discussed, confirming in part the propositions of other authors. According to the cladistic analysis and previous propositions, it is possible to infer that two Camelidae migration events occurred in America. In the first one, Hemiauchenia arrived in South America and, this was related to the speciation processes that originated Lama and Vicugna. In the second event, Palaeolama migrated from North America to the northern portion of South America. It is evident that there is a need for larger studies about fossil Camelidae, mainly regarding older ages and from the South American austral region. This is important to better undertand the geographic and temporal distribution of Camelidae and, thus, the biogeographic aspects after the Great American Biotic Interchange.",
"title": ""
},
{
"docid": "de061c5692bf11876c03b9b5e7c944a0",
"text": "The purpose of this article is to summarize several change theories and assumptions about the nature of change. The author shows how successful change can be encouraged and facilitated for long-term success. The article compares the characteristics of Lewin’s Three-Step Change Theory, Lippitt’s Phases of Change Theory, Prochaska and DiClemente’s Change Theory, Social Cognitive Theory, and the Theory of Reasoned Action and Planned Behavior to one another. Leading industry experts will need to continually review and provide new information relative to the change process and to our evolving society and culture. here are many change theories and some of the most widely recognized are briefly summarized in this article. The theories serve as a testimony to the fact that change is a real phenomenon. It can be observed and analyzed through various steps or phases. The theories have been conceptualized to answer the question, “How does successful change happen?” Lewin’s Three-Step Change Theory Kurt Lewin (1951) introduced the three-step change model. This social scientist views behavior as a dynamic balance of forces working in opposing directions. Driving forces facilitate change because they push employees in the desired direction. Restraining forces hinder change because they push employees in the opposite direction. Therefore, these forces must be analyzed and Lewin’s three-step model can help shift the balance in the direction of the planned change (http://www.csupomona.edu/~jvgrizzell/best_practices/bctheory.html). T INTERNATIONAL JOURNAL OF MNAGEMENT, BUSINESS, AND ADMINISTRATION 2_____________________________________________________________________________________ According to Lewin, the first step in the process of changing behavior is to unfreeze the existing situation or status quo. The status quo is considered the equilibrium state. Unfreezing is necessary to overcome the strains of individual resistance and group conformity. Unfreezing can be achieved by the use of three methods. First, increase the driving forces that direct behavior away from the existing situation or status quo. Second, decrease the restraining forces that negatively affect the movement from the existing equilibrium. Third, find a combination of the two methods listed above. Some activities that can assist in the unfreezing step include: motivate participants by preparing them for change, build trust and recognition for the need to change, and actively participate in recognizing problems and brainstorming solutions within a group (Robbins 564-65). Lewin’s second step in the process of changing behavior is movement. In this step, it is necessary to move the target system to a new level of equilibrium. Three actions that can assist in the movement step include: persuading employees to agree that the status quo is not beneficial to them and encouraging them to view the problem from a fresh perspective, work together on a quest for new, relevant information, and connect the views of the group to well-respected, powerful leaders that also support the change (http://www.csupomona.edu/~jvgrizzell/best_practices/bctheory.html). The third step of Lewin’s three-step change model is refreezing. This step needs to take place after the change has been implemented in order for it to be sustained or “stick” over time. It is high likely that the change will be short lived and the employees will revert to their old equilibrium (behaviors) if this step is not taken. 
It is the actual integration of the new values into the community values and traditions. The purpose of refreezing is to stabilize the new equilibrium resulting from the change by balancing both the driving and restraining forces. One action that can be used to implement Lewin’s third step is to reinforce new patterns and institutionalize them through formal and informal mechanisms including policies and procedures (Robbins 564-65). Therefore, Lewin’s model illustrates the effects of forces that either promote or inhibit change. Specifically, driving forces promote change while restraining forces oppose change. Hence, change will occur when the combined strength of one force is greater than the combined strength of the opposing set of forces (Robbins 564-65). Lippitt’s Phases of Change Theory Lippitt, Watson, and Westley (1958) extend Lewin’s Three-Step Change Theory. Lippitt, Watson, and Westley created a seven-step theory that focuses more on the role and responsibility of the change agent than on the evolution of the change itself. Information is continuously exchanged throughout the process. The seven steps are:",
"title": ""
},
{
"docid": "bda04f2eaee74979d7684681041e19bd",
"text": "In March of 2016, Google DeepMind's AlphaGo, a computer Go-playing program, defeated the reigning human world champion Go player, 4-1, a feat far more impressive than previous victories by computer programs in chess (IBM's Deep Blue) and Jeopardy (IBM's Watson). The main engine behind the program combines machine learning approaches with a technique called Monte Carlo tree search. Current versions of Monte Carlo tree search used in Go-playing algorithms are based on a version developed for games that traces its roots back to the adaptive multi-stage sampling simulation optimization algorithm for estimating value functions in finite-horizon Markov decision processes (MDPs) introduced by Chang et al. (2005), which was the first use of Upper Confidence Bounds (UCBs) for Monte Carlo simulation-based solution of MDPs. We review the main ideas in UCB-based Monte Carlo tree search by connecting it to simulation optimization through the use of two simple examples: decision trees and tic-tac-toe.",
"title": ""
}
] |
scidocsrr
|
bb63a77d820fe36177f1cc09ca0e9074
|
High Step-Up Active-Clamp Converter With Input-Current Doubler and Output-Voltage Doubler for Fuel Cell Power Systems
|
[
{
"docid": "819f6b62eb3f8f9d60437af28c657935",
"text": "The global electrical energy consumption is rising and there is a steady increase of the demand on the power capacity, efficient production, distribution and utilization of energy. The traditional power systems are changing globally, a large number of dispersed generation (DG) units, including both renewable and nonrenewable energy sources such as wind turbines, photovoltaic (PV) generators, fuel cells, small hydro, wave generators, and gas/steam powered combined heat and power stations, are being integrated into power systems at the distribution level. Power electronics, the technology of efficiently processing electric power, play an essential part in the integration of the dispersed generation units for good efficiency and high performance of the power systems. This paper reviews the applications of power electronics in the integration of DG units, in particular, wind power, fuel cells and PV generators.",
"title": ""
},
{
"docid": "0e5eee72224a306f7f68fe1e9ea730e6",
"text": "The implementation of a hybrid fuel cell/battery system is proposed to improve the slow transient response of a fuel cell stack. This system can be used for an autonomous device with quick load variations. A suitable three-port, galvanic isolated, bidirectional power converter is proposed to control the power flow. An energy management method for the proposed three-port circuit is analyzed and implemented. Measurements from a 500-W laboratory prototype are presented to demonstrate the validity of the approach",
"title": ""
},
{
"docid": "149d9a316e4c5df0c9300d26da685bc6",
"text": "Multiport dc-dc converters are particularly interesting for sustainable energy generation systems where diverse sources and storage elements are to be integrated. This paper presents a zero-voltage switching (ZVS) three-port bidirectional dc-dc converter. A simple and effective duty ratio control method is proposed to extend the ZVS operating range when input voltages vary widely. Soft-switching conditions over the full operating range are achievable by adjusting the duty ratio of the voltage applied to the transformer winding in response to the dc voltage variations at the port. Keeping the volt-second product (half-cycle voltage-time integral) equal for all the windings leads to ZVS conditions over the entire operating range. A detailed analysis is provided for both the two-port and the three-port converters. Furthermore, for the three-port converter a dual-PI-loop based control strategy is proposed to achieve constant output voltage, power flow management, and soft-switching. The three-port converter is implemented and tested for a fuel cell and supercapacitor system.",
"title": ""
}
] |
[
{
"docid": "9991f83811ea41a35f558cb724577ae6",
"text": "Virtual machine (VM) placement is the process of selecting the most suitable server in large cloud data centers to deploy newly-created VMs. Several approaches have been proposed to find a solution to this problem. However, most of the existing solutions only consider a limited number of resource types, thus resulting in unbalanced load or in the unnecessary activation of physical servers. In this article, we propose an algorithm, called Max-BRU, that maximizes the resource utilization and balances the usage of resources across multiple dimensions. Our algorithm is based on multiple resource-constraint metrics that help to find the most suitable server for deploying VMs in large cloud data centers. The proposed Max-BRU algorithm is evaluated by simulations based on synthetic datasets. Experimental results show two major improvements over the existing approaches for VM placement. First, Max-BRU increases the resource utilization by minimizing the amount of physical servers used. Second, Max-BRU effectively balances the utilization of multiple types of resources.",
"title": ""
},
{
"docid": "d405fc2bcbdc8f65584b7977b2442d56",
"text": "Financial Industry Studies is published by the Federal Reserve Bank of Dallas. The views expressed are those of the authors and should not be attributed to the Federal Reserve Bank of Dallas or the Federal Reserve System. Articles may be reprinted on the condition that the source is credited and a copy of the publication containing the reprinted article is provided to the Financial Industry Studies Department of the Federal Reserve Bank of Dallas.",
"title": ""
},
{
"docid": "133b2f033245dad2a2f35ff621741b2f",
"text": "In wireless sensor networks (WSNs), long lifetime requirement of different applications and limited energy storage capability of sensor nodes has led us to find out new horizons for reducing power consumption upon nodes. To increase sensor node's lifetime, circuit and protocols have to be energy efficient so that they can make a priori reactions by estimating and predicting energy consumption. The goal of this study is to present and discuss several strategies such as power-aware protocols, cross-layer optimization, and harvesting technologies used to alleviate power consumption constraint in WSNs.",
"title": ""
},
{
"docid": "8e06dbf42df12a34952cdd365b7f328b",
"text": "Data and theory from prism adaptation are reviewed for the purpose of identifying control methods in applications of the procedure. Prism exposure evokes three kinds of adaptive or compensatory processes: postural adjustments (visual capture and muscle potentiation), strategic control (including recalibration of target position), and spatial realignment of various sensory-motor reference frames. Muscle potentiation, recalibration, and realignment can all produce prism exposure aftereffects and can all contribute to adaptive performance during prism exposure. Control over these adaptive responses can be achieved by manipulating the locus of asymmetric exercise during exposure (muscle potentiation), the similarity between exposure and post-exposure tasks (calibration), and the timing of visual feedback availability during exposure (realignment).",
"title": ""
},
{
"docid": "cdc16633ea2ed3f8e9c417aff5577def",
"text": "Online social shopping communities are transforming the way customers communicate and exchange product information with others. To date, the issue of customer participation in online social shopping communities has become an important but underexplored research area in the academic literature. In this study, we examined how online social interactions affect customer information contribution behavior. We also explored the moderating role of customer reputation in the relationship between observational learning and reinforcement learning as well as customer information contribution behavior. Analyses of panel data from 6,121 customers in an online social fashion platform revealed that they are significant factors affecting customer information contribution behavior and that reinforcement learning exhibits a stronger effect than observational learning. The results also showed that customer reputation has a significant negative moderating effect on the relationship between observational learning and customer information contribution behavior. This study not only enriched our theoretical understanding of information contribution behavior but also provided guidelines for online social shopping community administrators to better design their community features. Introduction",
"title": ""
},
{
"docid": "629f6ab006700e5bc6b5a001a4d925e5",
"text": "Model predictive control (MPC) is an effective method for controlling robotic systems, particularly autonomous aerial vehicles such as quadcopters. However, application of MPC can be computationally demanding, and typically requires estimating the state of the system, which can be challenging in complex, unstructured environments. Reinforcement learning can in principle forego the need for explicit state estimation and acquire a policy that directly maps sensor readings to actions, but is difficult to apply to unstable systems that are liable to fail catastrophically during training before an effective policy has been found. We propose to combine MPC with reinforcement learning in the framework of guided policy search, where MPC is used to generate data at training time, under full state observations provided by an instrumented training environment. This data is used to train a deep neural network policy, which is allowed to access only the raw observations from the vehicle's onboard sensors. After training, the neural network policy can successfully control the robot without knowledge of the full state, and at a fraction of the computational cost of MPC. We evaluate our method by learning obstacle avoidance policies for a simulated quadrotor, using simulated onboard sensors and no explicit state estimation at test time.",
"title": ""
},
{
"docid": "1a38a85a026717f45c707f61aceeff38",
"text": "Two major trends in computing systems are the growth in high performance computing (HPC) with in particular an international exascale initiative, and big data with an accompanying cloud infrastructure of dramatic and increasing size and sophistication. In this paper, we study an approach to convergence for software and applications/algorithms and show what hardware architectures it suggests. We start by dividing applications into data plus model components and classifying each component (whether from Big Data or Big Compute) in the same way. This leads to 64 properties divided into 4 views, which are Problem Architecture (Macro pattern); Execution Features (Micro patterns); Data Source and Style; and finally the Processing (runtime) View. We discuss convergence software built around HPC-ABDS (High Performance Computing enhanced Apache Big Data Stack) and show how one can merge Big Data and HPC (Big Simulation) concepts into a single stack and discuss appropriate hardware.",
"title": ""
},
{
"docid": "f25eff52ed3c862f4e18cbcd4a1f1c5b",
"text": "BACKGROUND\nAdolescent girls dwelling in slums are vulnerable to poor reproductive health due to lack of awareness about reproductive health and low life skills. These girls are in a crucial stage of their life cycle and their health can impact the health of future generations. Despite adolescents comprising almost one-quarter of the Indian population they are ill served in terms of reproductive health.\n\n\nMETHODS\nThis cross-sectional study was done among 130 slum-dwelling adolescent girls, aged 13-19 years, using multistage sampling method from five slums in Chennai, southern India. The reproductive and menstrual morbidity profile, personal and environmental menstrual hygiene was assessed to determine their reproductive health-seeking behaviour and life skills.\n\n\nRESULTS\nNinety-five (73%) girls (95% CI 66.23-81.36) reported menstrual morbidity and 66 (51%; 95% CI 50.74-52.25) had symptoms suggestive of reproductive/urinary tract infection. Of the girls surveyed, 55 (42%) were married. Nearly 25% (95% CI 23.07-26.92) of the married girls had a history of abortion and 18% (95% CI 11.32-25.07) had self-treated with medications for the same. Contraceptive use among ever-married girls was 22.7% (95% CI 20.83-24.56). Even though 75% of respondents knew about HIV/AIDS, their knowledge of modes of transmission and prevention were low (39% and 19%, respectively). Almost 39% of respondents felt shame or insecurity as the key barrier for not seeking reproductive healthcare. About 52% had low life skill levels. On logistic regression, menstrual morbidity was high among those with low life skills, symptoms suggestive of reproductive/urinary tract infection were high among those who were married before 14 years of age and life skills were high among those who belonged to the scheduled caste community.\n\n\nCONCLUSION\nThere is a high prevalence of menstrual/reproductive morbidity, self-treated abortion and low knowledge about modes of HIV transmission/prevention and use of contraceptives among adolescent girls in slums in Chennai. There is a need to initiate community-level life skill education, sex education and behaviour change communication.",
"title": ""
},
{
"docid": "d49524f543a1749e71bebf4804cb20c8",
"text": "We propose MVCNN, a convolution neural network (CNN) architecture for sentence classification. It (i) combines diverse versions of pretrained word embeddings and (ii) extracts features of multigranular phrases with variable-size convolution filters. We also show that pretraining MVCNN is critical for good performance. MVCNN achieves state-of-the-art performance on four tasks: on small-scale binary, small-scale multi-class and largescale Twitter sentiment prediction and on subjectivity classification.",
"title": ""
},
{
"docid": "110742230132649f178d2fa99c8ffade",
"text": "Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performances, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.",
"title": ""
},
{
"docid": "28bb2aa8a05e90072e2dc4a3b5d871d5",
"text": "Radio Frequency Identification (RFID) security has not been properly handled in numerous applications, such as in public transportation systems. In this paper, a methodology to reverse engineer and detect security flaws is put into practice. Specifically, the communications protocol of an ISO/IEC 14443-B public transportation card used by hundreds of thousands of people in Spain was analyzed. By applying the methodology with a hardware tool (Proxmark 3), it was possible to access private information (e.g. trips performed, buses taken, fares applied…), to capture tag-reader communications, and even emulate both tags and readers.",
"title": ""
},
{
"docid": "8d23e6697a27db666b57eb5646017128",
"text": "To retrospectively analyse the intermediate-term outcome of holmium laser ablation of the prostate (HoLAP) of up to 4 years postoperatively in one of the largest series and to define the selection criteria for patients who benefit from potentially lower complications associated with HoLAP. Between June 2006 and November 2010, 144 patients with benign prostatic obstruction were treated at two centres with standardised HoLAP (2.0 J/50 Hz or 3.2 J/25 Hz with Versapulse® 80–100 W laser Lumenis®). Median follow-up was 21 months (range, 1–54). International prostate symptom score and quality of life (IPSS-QoL), PSA, prostate volume, maximal flow rate (Qmax), postvoiding residual volume (Vres) were evaluated pre- and postoperatively. All complications were graded according to CTCAE (v4.03). Mean patient age was 70.1 ± 7.7 years (range, 46–90). With a preoperative median prostate volume of 40 ml (range, 10–130), the median operation time was 50 min (range, 9–138). We observed a median catheterisation time of 1 day (range, 0–12) and hospitalisation time of 2 days (range, 1–16). IPSS-QoL, Qmax and Vres were significantly improved after 3 months, and all parameters remained unchanged after 12, 24 and 36 months. The rate of re-operation was significantly lower in patients with prostate volume <40 ml, compared to patients with prostates ≥40 ml (9.1 vs. 25 %, p = 0.04). HoLAP is a safe and effective procedure for the treatment of prostates <40 ml. Patients benefit from HoLAP because of a low bleeding rate and short hospital stay. Due to high recurrence rates, HoLAP should be avoided in prostates >40 ml.",
"title": ""
},
{
"docid": "5a91b2d8611b14e33c01390181eb1891",
"text": "Rapidly expanding volume of publications in the biomedical domain makes it increasingly difficult for a timely evaluation of the latest literature. That, along with a push for automated evaluation of clinical reports, present opportunities for effective natural language processing methods. In this study we target the problem of named entity recognition, where texts are processed to annotate terms that are relevant for biomedical studies. Terms of interest in the domain include gene and protein names, and cell lines and types. Here we report on a pipeline built on Embeddings from Language Models (ELMo) and a deep learning package for natural language processing (AllenNLP). We trained context-aware token embeddings on a dataset of biomedical papers using ELMo, and incorporated these embeddings in the LSTM-CRF model used by AllenNLP for named entity recognition. We show these representations improve named entity recognition for different types of biomedical named entities. We also achieve a new state of the art in gene mention detection on the BioCreative II gene mention shared task.",
"title": ""
},
{
"docid": "ecbd9201a7f8094a02fcec2c4f78240d",
"text": "Neural network compression has recently received much attention due to the computational requirements of modern deep models. In this work, our objective is to transfer knowledge from a deep and accurate model to a smaller one. Our contributions are threefold: (i) we propose an adversarial network compression approach to train the small student network to mimic the large teacher, without the need for labels during training; (ii) we introduce a regularization scheme to prevent a trivially-strong discriminator without reducing the network capacity and (iii) our approach generalizes on different teacher-student models. In an extensive evaluation on five standard datasets, we show that our student has small accuracy drop, achieves better performance than other knowledge transfer approaches and it surpasses the performance of the same network trained with labels. In addition, we demonstrate state-ofthe-art results compared to other compression strategies.",
"title": ""
},
{
"docid": "308da592c92c2343ffdc460786cc46c9",
"text": "Electroluminescence (EL) imaging is a useful modality for the inspection of photovoltaic (PV) modules. EL images provide high spatial resolution, which makes it possible to detect even finest defects on the surface of PV modules. However, the analysis of EL images is typically a manual process that is expensive, time-consuming, and requires expert knowledge of many different types of defects. In this work, we investigate two approaches for automatic detection of such defects in a single image of a PV cell. The approaches differ in their hardware requirements, which are dictated by their respective application scenarios. The more hardware-efficient approach is based on hand-crafted features that are classified in a Support Vector Machine (SVM). To obtain a strong performance, we investigate and compare various processing variants. The more hardware-demanding approach uses an end-to-end deep Convolutional Neural Network (CNN) that runs on a Graphics Processing Unit (GPU). Both approaches are trained on 1,968 cells extracted from high resolution EL intensity images of monoand polycrystalline PV modules. The CNN is more accurate, and reaches an average accuracy of 88.42%. The SVM achieves a slightly lower average accuracy of 82.44%, but can run on arbitrary hardware. Both automated approaches make continuous, highly accurate monitoring of PV cells feasible.",
"title": ""
},
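For the hardware-efficient branch described above, the pipeline is essentially "hand-crafted features into an SVM". The sketch below wires that up with scikit-learn on random placeholder features; the actual feature extraction, the defect classes, and the 1,968-cell dataset from the paper are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # placeholder for hand-crafted features of 200 EL cell images
y = rng.integers(0, 2, size=200)      # placeholder defect labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())   # 5-fold accuracy on the toy data
```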
{
"docid": "7100b0adb93419a50bbaeb1b7e32edf5",
"text": "Fractals have been very successful in quantifying the visual complexity exhibited by many natural patterns, and have captured the imagination of scientists and artists alike. Our research has shown that the poured patterns of the American abstract painter Jackson Pollock are also fractal. This discovery raises an intriguing possibility - are the visual characteristics of fractals responsible for the long-term appeal of Pollock's work? To address this question, we have conducted 10 years of scientific investigation of human response to fractals and here we present, for the first time, a review of this research that examines the inter-relationship between the various results. The investigations include eye tracking, visual preference, skin conductance, and EEG measurement techniques. We discuss the artistic implications of the positive perceptual and physiological responses to fractal patterns.",
"title": ""
},
{
"docid": "74fcade8e5f5f93f3ffa27c4d9130b9f",
"text": "Resampling is an important signature of manipulated images. In this paper, we propose two methods to detect and localize image manipulations based on a combination of resampling features and deep learning. In the first method, the Radon transform of resampling features are computed on overlapping image patches. Deep learning classifiers and a Gaussian conditional random field model are then used to create a heatmap. Tampered regions are located using a Random Walker segmentation method. In the second method, resampling features computed on overlapping image patches are passed through a Long short-term memory (LSTM) based network for classification and localization. We compare the performance of detection/localization of both these methods. Our experimental results show that both techniques are effective in detecting and localizing digital image forgeries.",
"title": ""
},
{
"docid": "090a0b0855fc7c5e98b4caf10b1dd02c",
"text": "Veitch and Newsham proposed a behaviorally-based model for lighting quality research, in which individually-based processes mediate the relationships between luminous conditions and such behavioral outcomes as task performance, mood, social behavior, aesthetic judgements and satisfaction. This review paper summarizes the state of knowledge concerning mediating psychological processes: perceived control, attention, environmental appraisal, and affect. These processes were selected because of their relevance to the explanations often given for lighting design choices. More explicit use of theoretically-driven predictions to guide lighting research would result in greater precision in our comprehension of lighting-behavior relationships to form the foundation of empirically-based lighting recommended practice. Psychology and Lighting Quality / 2 Introduction Surveys of office employees consistently report that lighting is among the more important features of office design and furnishings. Likewise, in the professional community of lighting designers and illuminating engineers, there is a long history of speculation that the quality of the luminous environment can influence task performance, comfort, and well-being, effects that are fundamentally psychological -that is, behavioral, in nature. This paper reviews the relevant literature concerning such effects, in an attempt to provide direction for lighting recommended practice and to further lighting quality research. Stein, Reynolds, and McGuinness once defined lighting quality as \"a term used to describe all of the factors in a lighting installation not directly connected with quantity of illumination. p.887) Their definition, although flexible enough to be applied to a wide variety of lit environments, offers little guidance concerning how to measure the quality of a lit environment. Veitch and Newsham proposed that lighting quality exists when the luminous conditions support the behavioral needs of individuals in the lit space. This definition has the merit of being measurable, but it only considers the immediate consequences of the luminous conditions on the individuals. More accurately, this definition should be expanded to include architectural and economic considerations, as well as individual well-being (Figure 1). The quality of the lighting in any given installation is determined in the balance of these (sometimes conflicting) dimensions. One central concern for the lighting community is the means by which we describe the luminous conditions in a space. Guidelines for lighting practice, for example, use photometric criteria to describe the minimum conditions required for good lighting practice (which, one hopes, will also achieve good lighting quality). There is general agreement that luminance, luminance distribution, uniformity, flicker rate, and spectral power distribution describe most of these quantities. Lighting system characteristics such as individual control, indirect lighting, and the use of daylight are also thought to contribute to good-quality lighting. Generally speaking, lighting practitioners strive to create luminous conditions that will optimize outcomes for individuals, but often with the intent of optimizing organizational outcomes such as retail sales and the clients bottom-line. 
These luminous conditions, like other environmental conditions, have effects on people that differ depending on individual characteristics such as age and sensitivity, and they may do so through any of several mechanisms (Figure 2). The mechanisms are theoretical constructs, useful to many scientific disciplines because of their utility for organizing empirical evidence and inductive reasoning into explanatory systems. For convenience, these parallel processes can be divided into two broad categories: psychological processes, which are the focus of this paper, and psychobiological processes such as visibility, photobiology, and arousal. The knowledge of these specific mechanisms that will arise from research will enable precise predictions about expected outcomes, predictions drawn from lighting research and from other topic areas that investigate the same processes. The manner of presentation in Figure 2 is deliberate, in that the lighting conditions are concurrent, the individual and group processes occur simultaneously, and the outcomes occur concurrently. For any individual at any one time, behavior is influenced by many causes. In the review, the sections devoted to each process are structured by the luminous conditions addressed in the body of research identified in our literature search. The focus of this paper is on the scientific evidence concerning the intervening psychological mechanisms that produce behavioral effects in response to luminous conditions (an earlier attempt is given in Veitch and Newsham). The goal is to describe the state of knowledge about these effects both to serve as initial input to revised recommended practice documents based on the lighting quality framework, and to guide future research. Others, with experience in lighting design, daylighting, luminaire design, and energy use, are better positioned to contribute to the discussion of the other influences on lighting quality diagrammed in Figure 1. (For instance, Loe and Rowlands have opened the discussion on the broader design and energy issues related to achieving good-quality lighting.) This review is organized in terms of the intervening psychological processes that are believed to underlie the lighting-behavior relationship: perceived control, attention, environmental appraisal, and affect. This is not an exhaustive list of possible mechanisms, but a categorization of the principal ones currently in use. This set was chosen because of the frequency with which these mechanisms are invoked as explanations of lighting design choices. When we understand why certain luminous conditions produce the behavioral outcomes we desire, then we will be able to re-create those conditions, and those outcomes, more reliably. Within the section for each of the four psychological processes are subsections discussing the operation of that process in response to various luminous conditions: luminance/illuminance; uniformity across tasks; luminance distributions within rooms; glare; spectral power distribution; flicker; indirect lighting systems; and, windows and daylighting. These categories generally agree with the luminous conditions described in various recommended practice documents. Not every possible luminous condition has been studied in relation to each process, so the number of subsections varies for each psychological process (indeed, in the section on perceived control it was impossible to distinguish between studies on the basis of luminous conditions).
Poor-quality research is a major impediment to understanding. Such criticism of lighting research is longstanding, recognizing its weaknesses in scientific procedure and statistical analysis. Nonetheless, the existing body of writing about lighting has formed the basis for lighting education and recommended practice documents, Therefore, the review is inclusive, including empirical work from many sources that might be familiar to readers. A detailed critique of each article is beyond the scope of this paper. Instead, major limitations are noted where appropriate, and those conclusions are drawn that were judged supportable from the data provided. Economic considerations have driven much lighting research, with the result that the vast majority of investigations have considered lighting for offices, with relatively few investigations occurring in other settings. Accordingly, this review focuses on office lighting applications, although studies from other settings are included where their results are relevant. This review is limited to suprathreshold viewing conditions at adaptation levels typical of interiors. Psychological processes in lighting-behavior relationships Perceived Control Among the classic observations in modern psychology is the finding that perceived control can moderate stress reactions. Glass and Singer found that when people were given the opportunity to end an aversive noise (although they did not use it), they did not experience the negative aftereffects on task performance that were observed in people who had not had the opportunity. When control over aversive stimuli is not available, learned helplessness can result, in which individuals suffer emotional, cognitive, and behavioral deficits. Furthermore, belief that the absence of control leads to feelings of unhappiness and powerlessness is widespread. This belief is often used to justify the adoption of individual lighting controls, although few empirical investigations exist to justify this additional expense. Field surveys are consistent in reporting that a sizable percentage of office employees prefer to have some degree of control over their office lighting. Fifty-four per cent reported this in a large North American survey, and 67% in one Midwestern US building. Lighting designers apparently believe that providing individualized lighting controls has far-reaching benefits for occupants. However, perceived control does not always lead to desirable outcomes. Burger observed that people will decline control when it carries the risk of not achieving a desired goal or if it creates uncomfortable concern with self-presentation. If an expert seems more likely to make the correct choice, or if one risks looking foolish by making the wrong choice, this will be a choice that one does not want. Wineman concurred that in certain situations, control can lead to undesired effects if it requires choices one did not wish to make. One experiment to date concerning the performance effects of perceived control over lighting obtained results consistent with this pattern. The experiment varied the degree of control available for wor",
"title": ""
}
] |
scidocsrr
|
a74e2c3798f7a14f0a498802fb5cd275
|
Improving trace accuracy through data-driven configuration and composition of tracing features
|
[
{
"docid": "f391c56dd581d965548062944200e95f",
"text": "We present a traceability recovery method and tool based on latent semantic indexing (LSI) in the context of an artefact management system. The tool highlights the candidate links not identified yet by the software engineer and the links identified but missed by the tool, probably due to inconsistencies in the usage of domain terms in the traced software artefacts. We also present a case study of using the traceability recovery tool on software artefacts belonging to different categories of documents, including requirement, design, and testing documents, as well as code components.",
"title": ""
}
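As a rough illustration of LSI-based traceability recovery of the kind described above, the snippet below projects artefacts into a low-rank latent space and ranks candidate links by cosine similarity. The artefact texts, the number of latent dimensions, and the best-match selection are all assumptions; the tool in the paper additionally surfaces missed links and takes feedback from the software engineer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

requirements = ["the system shall encrypt user credentials",
                "the user interface shall display the account balance"]
code_docs = ["class CredentialEncryptor handles password hashing and encryption",
             "BalanceView renders the current account balance on screen"]

corpus = requirements + code_docs
tfidf = TfidfVectorizer().fit_transform(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Candidate links: similarity of each requirement to each code artefact in the latent space.
sims = cosine_similarity(lsi[:len(requirements)], lsi[len(requirements):])
for i, row in enumerate(sims):
    print(f"REQ{i} best match: code artefact {row.argmax()} (score {row.max():.2f})")
```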
] |
[
{
"docid": "dd9942a62311e363d4b3641324dbd96a",
"text": "A series of diurnal airborne campaigns were conducted over an orchard field to assess the canopy Photochemical Reflectance Index (PRI) as an indicator of water stress. Airborne campaigns over two years were conducted with the Airborne Hyperspectral Scanner (AHS) over an orchard field to investigate changes in PRI, in the Transformed Chlorophyll Absorption in Reflectance Index (TCARI) normalized by the Optimized SoilAdjusted Vegetation Index (OSAVI) (TCARI/OSAVI), and in the Normalized Difference Vegetation Index (NDVI) as function of field-measured physiological indicators of water stress, such as stomatal conductance, stem water potential, steady-state fluorescence, and crown temperature. The AHS sensor was flown at three times on each 2004 and 2005 years, collecting 2 m spatial resolution imagery in 80 spectral bands in the 0.43– 12.5 μm spectral range. Indices PRI, TCARI/OSAVI, and NDVI were calculated from reflectance bands, and thermal bands were assessed for the retrieval of land surface temperature, separating pure crowns from shadows and sunlit soil pixels. The Photochemical Reflectance Index, originally developed for xanthophyll cycle pigment change detection was calculated to assess its relationship with water stress at a canopy level, and more important, to assess canopy structural and viewing geometry effects for water stress detection in diurnal airborne experiments. The FLIGHT 3D canopy reflectance model was used to simulate the bi-directional reflectance changes as function of the viewing geometry, background and canopy structure. This manuscript demonstrates that the airborne-level PRI index is sensitive to the de-epoxidation of the xanthophyll pigment cycle caused by water stress levels, but affected diurnally by the confounding effects of BRDF. Among the three vegetation indices calculated, only airborne PRI demonstrated sensitivity to diurnal changes in physiological indicators of water stress, such as canopy temperature minus air temperature (Tc–Ta), stomatal conductance (G), and stem water potential (ψ) measured in the field at the time of each image acquisition. No relationships were found from the diurnal experiments between NDVI and TCARI/OSAVI with the tree-measured physiological measures. FLIGHT model simulations of PRI demonstrated that PRI is highly affected by the canopy structure and background. © 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
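The indices compared above are simple band ratios once reflectance is available. The helper below shows the standard NDVI formulation and the usual narrow-band PRI definition around 531/570 nm; the toy pixel values and the exact AHS band assignments are assumptions, and TCARI/OSAVI is omitted for brevity.

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red)

def pri(r531, r570):
    # Photochemical Reflectance Index from narrow-band reflectance near 531 nm and 570 nm
    return (r531 - r570) / (r531 + r570)

# Toy reflectance values for one sunlit crown pixel (not taken from the study's imagery).
print(ndvi(nir=0.45, red=0.08), pri(r531=0.062, r570=0.068))
```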
{
"docid": "2c2942905010e71cda5f8b0f41cf2dd0",
"text": "1 Focus and anaphoric destressing Consider a pronunciation of (1) with prominence on the capitalized noun phrases. In terms of a relational notion of prominence, the subject NP she] is prominent within the clause S she beats me], and NP Sue] is prominent within the clause S Sue beats me]. This prosody seems to have the pragmatic function of putting the two clauses into opposition, with prominences indicating where they diier, and prosodic reduction of the remaining parts indicating where the clauses are invariant. (1) She beats me more often than Sue beats me Car84], Roc86] and Roo92] propose theories of focus interpretation which formalize the idea just outlined. Under my assumptions, the prominences are the correlates of a syntactic focus features on the two prominent NPs, written as F subscripts. Further, the grammatical representation of (1) includes operators which interpret the focus features at the level of the minimal dominating S nodes. In the logical form below, each focus feature is interpreted by an operator written .",
"title": ""
},
{
"docid": "db0cac6172c63eb5b91b2e29d037cc63",
"text": "In this article, we address open challenges in large-scale classification, focusing on how to effectively leverage the dependency structures (hierarchical or graphical) among class labels, and how to make the inference scalable in jointly optimizing all model parameters. We propose two main approaches, namely the hierarchical Bayesian inference framework and the recursive regularization scheme. The key idea in both approaches is to reinforce the similarity among parameter across the nodes in a hierarchy or network based on the proximity and connectivity of the nodes. For scalability, we develop hierarchical variational inference algorithms and fast dual coordinate descent training procedures with parallelization. In our experiments for classification problems with hundreds of thousands of classes and millions of training instances with terabytes of parameters, the proposed methods show consistent and statistically significant improvements over other competing approaches, and the best results on multiple benchmark datasets for large-scale classification.",
"title": ""
},
{
"docid": "15a0898247365fa5ff29fd54560f547d",
"text": "SemEval 2018 Task 7 focuses on relation extraction and classification in scientific literature. In this work, we present our tree-based LSTM network for this shared task. Our approach placed 9th (of 28) for subtask 1.1 (relation classification), and 5th (of 20) for subtask 1.2 (relation classification with noisy entities). We also provide an ablation study of features included as input to the network.",
"title": ""
},
{
"docid": "b759613b1eedd29d32fbbc118767b515",
"text": "Deep learning has been shown successful in a number of domains, ranging from acoustics, images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, a significant amount of research efforts have been devoted to this area, greatly advancing graph analyzing techniques. In this survey, we comprehensively review different kinds of deep learning methods applied to graphs. We divide existing methods into three main categories: semi-supervised methods including Graph Neural Networks and Graph Convolutional Networks, unsupervised methods including Graph Autoencoders, and recent advancements including Graph Recurrent Neural Networks and Graph Reinforcement Learning. We then provide a comprehensive overview of these methods in a systematic manner following their history of developments. We also analyze the differences of these methods and how to composite different architectures. Finally, we briefly outline their applications and discuss potential future directions.",
"title": ""
},
{
"docid": "15709a8aecbf8f4f35bf47b79c3dca03",
"text": "We introduce a new approach to hierarchy formation and task decomposition in hierarchical reinforcement learning. Our method is based on the Hierarchy Of Abstract Machines (HAM) framework because HAM approach is able to design efficient controllers that will realize specific behaviors in real robots. The key to our algorithm is the introduction of the internal or “mental” environment in which the state represents the structure of the HAM hierarchy. The internal action in this environment leads to changes the hierarchy of HAMs. We propose the classical Qlearning procedure in the internal environment which allows the agent to obtain an optimal hierarchy. We extends the HAM framework by adding on-model approach to select the appropriate sub-machine to execute action sequences for certain class of external environment states. Preliminary experiments demonstrated the prospects of the method.",
"title": ""
},
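The learning rule referred to above is the standard tabular Q-learning update, which the authors run over states that encode the HAM hierarchy. The snippet applies that update to a deliberately tiny chain environment standing in for their internal "mental" environment; the environment, reward, and hyperparameters are illustrative assumptions.

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.2
rng = np.random.default_rng(0)

def env_step(s, a):                              # toy chain: action 1 moves right, 0 moves left
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s_next, (1.0 if s_next == n_states - 1 else 0.0), s_next == n_states - 1

for _ in range(300):                             # episodes
    s, done = 0, False
    for _ in range(100):                         # cap episode length
        explore = rng.random() < eps or np.all(Q[s] == Q[s].max())
        a = int(rng.integers(n_actions)) if explore else int(np.argmax(Q[s]))
        s_next, r, done = env_step(s, a)
        target = r if done else r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a])    # the Q-learning update
        s = s_next
        if done:
            break

print(np.argmax(Q, axis=1))                      # greedy policy; non-terminal states prefer "right"
```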
{
"docid": "483c87e4ad58596f4651e4e63c501579",
"text": "Chitosan, a polyaminosaccharide obtained by alkaline deacetylation of chitin, possesses useful properties including biodegradability, biocompatibility, low toxicity, and good miscibility with other polymers. It is extensively used in many applications in biology, medicine, agriculture, environmental protection, and the food and pharmaceutical industries. The amino and hydroxyl groups present in the chitosan backbone provide positions for modifications that are influenced by factors such as the molecular weight, viscosity, and type of chitosan, as well as the reaction conditions. The modification of chitosan by chemical methods is of interest because the basic chitosan skeleton is not modified and the process results in new or improved properties of the material. Among the chitosan derivatives, cyclodextrin-grafted chitosan and poly(ethylene glycol)-grafted chitosan are excellent candidates for a range of biomedical, environmental decontamination, and industrial purposes. This work discusses modifications including chitosan with attached cyclodextrin and poly(ethylene glycol), and the main applications of these chitosan derivatives in the biomedical field.",
"title": ""
},
{
"docid": "4bec71105c8dca3d0b48e99cdd4e809a",
"text": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.",
"title": ""
},
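A common way to realize the "fine-tune a pre-trained ImageNet model" option discussed above is to load a torchvision backbone, freeze it, and retrain a new classification head. This sketch assumes torchvision >= 0.13 for the weights enum; the 2-class head, the fully frozen backbone, and the random tensors standing in for image patches are illustrative choices, not the paper's exact protocol.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                       # freeze the ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # new head, e.g. lymph node present / absent

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                   # stand-in batch of image patches
y = torch.tensor([0, 1, 0, 1])
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```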
{
"docid": "19d79b136a9af42ac610131217de8c08",
"text": "The aim of the experimental study described in this article is to investigate the effect of a lifelike character with subtle expressivity on the affective state of users. The character acts as a quizmaster in the context of a mathematical game. This application was chosen as a simple, and for the sake of the experiment, highly controllable, instance of human–computer interfaces and software. Subtle expressivity refers to the character’s affective response to the user’s performance by emulating multimodal human–human communicative behavior such as different body gestures and varying linguistic style. The impact of em-pathic behavior, which is a special form of affective response, is examined by deliberately frustrating the user during the game progress. There are two novel aspects in this investigation. First, we employ an animated interface agent to address the affective state of users rather than a text-based interface, which has been used in related research. Second, while previous empirical studies rely on questionnaires to evaluate the effect of life-like characters, we utilize physiological information of users (in addition to questionnaire data) in order to precisely associate the occurrence of interface events with users’ autonomic nervous system activity. The results of our study indicate that empathic character response can significantly decrease user stress see front matter r 2004 Elsevier Ltd. All rights reserved. .ijhcs.2004.11.009 cle is a significantly revised and extended version of Prendinger et al. (2003). nding author. Tel.: +813 4212 2650; fax: +81 3 3556 1916. dresses: [email protected] (H. Prendinger), [email protected] (J. Mori), v.t.u-tokyo.ac.jp (M. Ishizuka).",
"title": ""
},
{
"docid": "72b246820952b752bd001212e5f0dd2e",
"text": "This paper presents an attribute and-or grammar (A-AOG) model for jointly inferring human body pose and human attributes in a parse graph with attributes augmented to nodes in the hierarchical representation. In contrast to other popular methods in the current literature that train separate classifiers for poses and individual attributes, our method explicitly represents the decomposition and articulation of body parts, and account for the correlations between poses and attributes. The A-AOG model is an amalgamation of three traditional grammar formulations: (i) Phrase structure grammar representing the hierarchical decomposition of the human body from whole to parts; (ii) Dependency grammar modeling the geometric articulation by a kinematic graph of the body pose; and (iii) Attribute grammar accounting for the compatibility relations between different parts in the hierarchy so that their appearances follow a consistent style. The parse graph outputs human detection, pose estimation, and attribute prediction simultaneously, which are intuitive and interpretable. We conduct experiments on two tasks on two datasets, and experimental results demonstrate the advantage of joint modeling in comparison with computing poses and attributes independently. Furthermore, our model obtains better performance over existing methods for both pose estimation and attribute prediction tasks.",
"title": ""
},
{
"docid": "e8e8e6d288491e715177a03601500073",
"text": "Protein–protein interactions constitute the regulatory network that coordinates diverse cellular functions. Co-immunoprecipitation (co-IP) is a widely used and effective technique to study protein–protein interactions in living cells. However, the time and cost for the preparation of a highly specific antibody is the major disadvantage associated with this technique. In the present study, a co-IP system was developed to detect protein–protein interactions based on an improved protoplast transient expression system by using commercially available antibodies. This co-IP system eliminates the need for specific antibody preparation and transgenic plant production. Leaf sheaths of rice green seedlings were used for the protoplast transient expression system which demonstrated high transformation and co-transformation efficiencies of plasmids. The transient expression system developed by this study is suitable for subcellular localization and protein detection. This work provides a rapid, reliable, and cost-effective system to study transient gene expression, protein subcellular localization, and characterization of protein–protein interactions in vivo.",
"title": ""
},
{
"docid": "76d029c669e84e420c8513bd837fb59b",
"text": "Since its original publication, the Semi-Global Matching (SGM) technique has been re-implemented by many researchers and companies. The method offers a very good trade off between runtime and accuracy, especially at object borders and fine structures. It is also robust against radiometric differences and not sensitive to the choice of parameters. Therefore, it is well suited for solving practical problems. The applications reach from remote sensing, like deriving digital surface models from aerial and satellite images, to robotics and driver assistance systems. This paper motivates and explains the method, shows current developments as well as examples from various applications.",
"title": ""
},
{
"docid": "fb6d89e2faee942a0a92ded6ead0d8c7",
"text": "Each relationship has its own personality. Almost immediately after a social interaction begins, verbal and nonverbal behaviors become synchronized. Even in asocial contexts, individuals tend to produce utterances that match the grammatical structure of sentences they have recently heard or read. Three projects explore language style matching (LSM) in everyday writing tasks and professional writing. LSM is the relative use of 9 function word categories (e.g., articles, personal pronouns) between any 2 texts. In the first project, 2 samples totaling 1,744 college students answered 4 essay questions written in very different styles. Students automatically matched the language style of the target questions. Overall, the LSM metric was internally consistent and reliable across writing tasks. Women, participants of higher socioeconomic status, and students who earned higher test grades matched with targets more than others did. In the second project, 74 participants completed cliffhanger excerpts from popular fiction. Judges' ratings of excerpt-response similarity were related to content matching but not function word matching, as indexed by LSM. Further, participants were not able to intentionally increase style or content matching. In the final project, an archival study tracked the professional writing and personal correspondence of 3 pairs of famous writers across their relationships. Language matching in poetry and letters reflected fluctuations in the relationships of 3 couples: Sigmund Freud and Carl Jung, Elizabeth Barrett and Robert Browning, and Sylvia Plath and Ted Hughes. Implications for using LSM as an implicit marker of social engagement and influence are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved).",
"title": ""
},
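LSM as used above is computed per function-word category and then averaged over categories. The helper below uses the commonly reported per-category form 1 - |p1 - p2| / (p1 + p2) on category percentages; the three categories, the word counts, and the smoothing constant are made up, and real analyses use the full set of nine categories with proper tokenization.

```python
def lsm(counts1, counts2, total1, total2, categories):
    # Average of per-category style-matching scores between two texts.
    scores = []
    for c in categories:
        p1 = counts1.get(c, 0) / total1 * 100.0   # percentage of words in category c, text 1
        p2 = counts2.get(c, 0) / total2 * 100.0   # percentage of words in category c, text 2
        scores.append(1.0 - abs(p1 - p2) / (p1 + p2 + 1e-4))
    return sum(scores) / len(scores)

cats = ["articles", "personal_pronouns", "prepositions"]
print(lsm({"articles": 8, "personal_pronouns": 5, "prepositions": 12},
          {"articles": 6, "personal_pronouns": 9, "prepositions": 10},
          total1=120, total2=110, categories=cats))
```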
{
"docid": "9fb9664eea84d3bc0f59f7c4714debc1",
"text": "International research has shown that users are complacent when it comes to smartphone security behaviour. This is contradictory, as users perceive data stored on the `smart' devices to be private and worth protecting. Traditionally less attention is paid to human factors compared to technical security controls (such as firewalls and antivirus), but there is a crucial need to analyse human aspects as technology alone cannot deliver complete security solutions. Increasing a user's knowledge can improve compliance with good security practices, but for trainers and educators to create meaningful security awareness materials they must have a thorough understanding of users' existing behaviours, misconceptions and general attitude towards smartphone security.",
"title": ""
},
{
"docid": "ea624ba3a83c4f042fb48f4ebcba705a",
"text": "Using magnetic field data as fingerprints for smartphone indoor positioning has become popular in recent years. Particle filter is often used to improve accuracy. However, most of existing particle filter based approaches either are heavily affected by motion estimation errors, which result in unreliable systems, or impose strong restrictions on smartphone such as fixed phone orientation, which are not practical for real-life use. In this paper, we present a novel indoor positioning system for smartphones, which is built on our proposed reliability-augmented particle filter. We create several innovations on the motion model, the measurement model, and the resampling model to enhance the basic particle filter. To minimize errors in motion estimation and improve the robustness of the basic particle filter, we propose a dynamic step length estimation algorithm and a heuristic particle resampling algorithm. We use a hybrid measurement model, combining a new magnetic fingerprinting model and the existing magnitude fingerprinting model, to improve system performance, and importantly avoid calibrating magnetometers for different smartphones. In addition, we propose an adaptive sampling algorithm to reduce computation overhead, which in turn improves overall usability tremendously. Finally, we also analyze the “Kidnapped Robot Problem” and present a practical solution. We conduct comprehensive experimental studies, and the results show that our system achieves an accuracy of 1~2 m on average in a large building.",
"title": ""
},
{
"docid": "a7d9ac415843146b82139e50edf4ccf2",
"text": "Recommender Systems (RSs) are software tools and techniques providing suggestions of relevant items to users. These systems have received increasing attention from both academy and industry since the 90’s, due to a variety of practical applications as well as complex problems to solve. Since then, the number of research papers published has increased significantly in many application domains (books, documents, images, movies, music, shopping, TV programs, and others). One of these domains has our attention in this paper due to the massive proliferation of televisions (TVs) with computational and network capabilities and due to the large amount of TV content and TV-related content available on the Web. With the evolution of TVs and RSs, the diversity of recommender systems for TV has increased substantially. In this direction, it is worth mentioning that we consider “recommender systems for TV” as those that make recommendations of both TV-content and any content related to TV. Due to this diversity, more investigation is necessary because research on recommender systems for TV domain is still broader and less mature than in other research areas. Thus, this literature review (LR) seeks to classify, synthesize, and present studies according to different perspectives of RSs in the television domain. For that, we initially identified, from the scientific literature, 282 relevant papers published from 2003 to May, 2015. The papers were then categorized and discussed according to different research and development perspectives: recommended item types, approaches, algorithms, architectural models, output devices, user profiling and evaluation. The obtained results can be useful to reveal trends and opportunities for both researchers and practitioners in the area.",
"title": ""
},
{
"docid": "20710cf5fac30800217c5b9568d3541a",
"text": "BACKGROUND\nAcne scarring is treatable by a variety of modalities. Ablative carbon dioxide laser (ACL), while effective, is associated with undesirable side effect profiles. Newer modalities using the principles of fractional photothermolysis (FP) produce modest results than traditional carbon dioxide (CO(2)) lasers but with fewer side effects. A novel ablative CO(2) laser device use a technique called ablative fractional resurfacing (AFR), combines CO(2) ablation with a FP system. This study was conducted to compare the efficacy of Q-switched 1064-nm Nd: YAG laser and that of fractional CO(2) laser in the treatment of patients with moderate to severe acne scarring.\n\n\nMETHODS\nSixty four subjects with moderate to severe facial acne scars were divided randomly into two groups. Group A received Q-Switched 1064-nm Nd: YAG laser and group B received fractional CO(2) laser. Two groups underwent four session treatment with laser at one month intervals. Results were evaluated by patients based on subjective satisfaction and physicians' assessment and photo evaluation by two blinded dermatologists. Assessments were obtained at baseline and at three and six months after final treatment.\n\n\nRESULTS\nPost-treatment side effects were mild and transient in both groups. According to subjective satisfaction (p = 0.01) and physicians' assessment (p < 0.001), fractional CO(2) laser was significantly more effective than Q- Switched 1064- nm Nd: YAG laser.\n\n\nCONCLUSIONS\nFractional CO2 laser has the most significant effect on the improvement of atrophic facial acne scars, compared with Q-Switched 1064-nm Nd: YAG laser.",
"title": ""
},
{
"docid": "0aa7a61ae2d73b017b5acdd885d7c0ef",
"text": "3GPP Long Term Evolution-Advanced (LTE-A) aims at enhancement of LTE performance in many respects including the system capacity and network coverage. This enhancement can be accomplished by heterogeneous networks (HetNets) where additional micro-nodes that require lower transmission power are efficiently deployed. More careful management of mobility and handover (HO) might be required in HetNets compared to homogeneous networks where all nodes require the same transmission power. In this article, we provide a technical overview of mobility and HO management for HetNets in LTEA. Moreover, we investigate the A3-event which requires a certain criterion to be met for HO. The criterion involves the reference symbol received power/quality of user equipment (UE), hysteresis margin, and a number of offset parameters based on proper HO timing, i.e., time-to-trigger (TTT). Optimum setting of these parameters are not trivial task, and has to be determined depending on UE speed, propagation environment, system load, deployed HetNets configuration, etc. Therefore, adaptive TTT values with given hysteresis margin for the lowest ping pong rate within 2 % of radio link failure rate depending on UE speed and deployed HetNets configuration are investigated in this article.",
"title": ""
},
{
"docid": "9172d4ba2e86a7d4918ef64d7b837084",
"text": "Electromagnetic generators (EMGs) and triboelectric nanogenerators (TENGs) are the two most powerful approaches for harvesting ambient mechanical energy, but the effectiveness of each depends on the triggering frequency. Here, after systematically comparing the performances of EMGs and TENGs under low-frequency motion (<5 Hz), we demonstrated that the output performance of EMGs is proportional to the square of the frequency, while that of TENGs is approximately in proportion to the frequency. Therefore, the TENG has a much better performance than that of the EMG at low frequency (typically 0.1-3 Hz). Importantly, the extremely small output voltage of the EMG at low frequency makes it almost inapplicable to drive any electronic unit that requires a certain threshold voltage (∼0.2-4 V), so that most of the harvested energy is wasted. In contrast, a TENG has an output voltage that is usually high enough (>10-100 V) and independent of frequency so that most of the generated power can be effectively used to power the devices. Furthermore, a TENG also has advantages of light weight, low cost, and easy scale up through advanced structure designs. All these merits verify the possible killer application of a TENG for harvesting energy at low frequency from motions such as human motions for powering small electronics and possibly ocean waves for large-scale blue energy.",
"title": ""
},
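The frequency dependence claimed above can be written out explicitly. The proportionalities below simply restate the comparison (device-specific constants are omitted and assumed); this is not a derivation from the paper.

```latex
\[
  P_{\mathrm{EMG}} \propto f^{2}, \qquad P_{\mathrm{TENG}} \propto f
  \quad\Longrightarrow\quad
  \frac{P_{\mathrm{EMG}}(f/2)}{P_{\mathrm{EMG}}(f)} = \frac{1}{4}, \qquad
  \frac{P_{\mathrm{TENG}}(f/2)}{P_{\mathrm{TENG}}(f)} = \frac{1}{2}.
\]
```

So halving the driving frequency, say from 2 Hz to 1 Hz, costs the EMG four times its output power but the TENG only half of it, which is the low-frequency advantage argued for above.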
{
"docid": "f83b5593f24eb3ac549699d2d43f7e8a",
"text": "As economic globalization intensifies competition and creates a climate of constant change, winning and keeping customers has never been more important. Nowadays, Banks have realized that customer relationships are a very important factor for their success. Customer relationship management (CRM) is a strategy that can help them to build long-lasting relationships with their customers and increase their revenues and profits. CRM in the banking sector is of greater importance. The aim of this study is to explore and analyze the strategic implementation of CRM in selected banks of Pakistan, identify the benefits, the problems, as well as the success and failure factors of the implementation and develop a better understanding of CRM impact on banking competitiveness as well as provide a greater understanding of what constitutes good CRM practices. In this study, CMAT (Customer Management Assessment Tool) model is used which encompasses all the essential elements of practical customer relationship management. Data is collected through questionnaires from the three major banks (HBL, MCB, and Citibank) of Pakistan. The evidence supports that CRM is gradually being practiced in studied banks; however the true spirit of CRM is still needed to be on the active agenda of the banking sector in Pakistan. This study contributes to the financial services literature as it is one of the very few that have examined CRM applications, a comparatively new technology, in the Pakistani banking sector, where very limited research has taken place on the implementation of CRM.",
"title": ""
}
] |
scidocsrr
|
46b9ef0704e87a27b376720750fb1259
|
Predicting taxi demand at high spatial resolution: Approaching the limit of predictability
|
[
{
"docid": "d0bb1b3fc36016b166eb9ed25cb7ee61",
"text": "Informed driving is increasingly becoming a key feature for increasing the sustainability of taxi companies. The sensors that are installed in each vehicle are providing new opportunities for automatically discovering knowledge, which, in return, delivers information for real-time decision making. Intelligent transportation systems for taxi dispatching and for finding time-saving routes are already exploring these sensing data. This paper introduces a novel methodology for predicting the spatial distribution of taxi-passengers for a short-term time horizon using streaming data. First, the information was aggregated into a histogram time series. Then, three time-series forecasting techniques were combined to originate a prediction. Experimental tests were conducted using the online data that are transmitted by 441 vehicles of a fleet running in the city of Porto, Portugal. The results demonstrated that the proposed framework can provide effective insight into the spatiotemporal distribution of taxi-passenger demand for a 30-min horizon.",
"title": ""
},
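The core recipe above, aggregating the stream into a histogram time series per location and blending several short-term forecasters, can be mimicked with very simple predictors. The three models and the fixed weights below are stand-ins for the paper's (more sophisticated) forecasting techniques and its weighting scheme; the toy demand series is invented.

```python
import numpy as np

def moving_average(series, w=4):
    return float(np.mean(series[-w:]))

def exp_smoothing(series, alpha=0.4):
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return float(level)

def last_value(series):
    return float(series[-1])

def ensemble_forecast(series, weights=(0.4, 0.4, 0.2)):
    # Weighted blend of the three simple predictors for the next 30-min slot.
    preds = (moving_average(series), exp_smoothing(series), last_value(series))
    return sum(w * p for w, p in zip(weights, preds))

demand = [12, 15, 9, 14, 18, 16, 20, 17]   # pickups per 30-min slot at one taxi stand (toy data)
print(ensemble_forecast(demand))
```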
{
"docid": "b294ca2034fa4133e8f7091426242244",
"text": "The development of a city gradually fosters different functional regions, such as educational areas and business districts. In this paper, we propose a framework (titled DRoF) that Discovers Regions of different Functions in a city using both human mobility among regions and points of interests (POIs) located in a region. Specifically, we segment a city into disjointed regions according to major roads, such as highways and urban express ways. We infer the functions of each region using a topic-based inference model, which regards a region as a document, a function as a topic, categories of POIs (e.g., restaurants and shopping malls) as metadata (like authors, affiliations, and key words), and human mobility patterns (when people reach/leave a region and where people come from and leave for) as words. As a result, a region is represented by a distribution of functions, and a function is featured by a distribution of mobility patterns. We further identify the intensity of each function in different locations. The results generated by our framework can benefit a variety of applications, including urban planning, location choosing for a business, and social recommendations. We evaluated our method using large-scale and real-world datasets, consisting of two POI datasets of Beijing (in 2010 and 2011) and two 3-month GPS trajectory datasets (representing human mobility) generated by over 12,000 taxicabs in Beijing in 2010 and 2011 respectively. The results justify the advantages of our approach over baseline methods solely using POIs or human mobility.",
"title": ""
}
] |
[
{
"docid": "c5122000c9d8736cecb4d24e6f56aab8",
"text": "New credit cards containing Europay, MasterCard and Visa (EMV) chips for enhanced security used in-store purchases rather than online purchases have been adopted considerably. EMV supposedly protects the payment cards in such a way that the computer chip in a card referred to as chip-and-pin cards generate a unique one time code each time the card is used. The one time code is designed such that if it is copied or stolen from the merchant system or from the system terminal cannot be used to create a counterfeit copy of that card or counterfeit chip of the transaction. However, in spite of this design, EMV technology is not entirely foolproof from failure. In this paper we discuss the issues, failures and fraudulent cases associated with EMV Chip-And-Card technology.",
"title": ""
},
{
"docid": "8688f904ff190f9434cf20c6fc0f7eb9",
"text": "3-D shape analysis has attracted extensive research efforts in recent years, where the major challenge lies in designing an effective high-level 3-D shape feature. In this paper, we propose a multi-level 3-D shape feature extraction framework by using deep learning. The low-level 3-D shape descriptors are first encoded into geometric bag-of-words, from which middle-level patterns are discovered to explore geometric relationships among words. After that, high-level shape features are learned via deep belief networks, which are more discriminative for the tasks of shape classification and retrieval. Experiments on 3-D shape recognition and retrieval demonstrate the superior performance of the proposed method in comparison to the state-of-the-art methods.",
"title": ""
},
{
"docid": "36e4260c43efca5a67f99e38e5dbbed8",
"text": "The inherent compliance of soft fluidic actuators makes them attractive for use in wearable devices and soft robotics. Their flexible nature permits them to be used without traditional rotational or prismatic joints. Without these joints, however, measuring the motion of the actuators is challenging. Actuator-level sensors could improve the performance of continuum robots and robots with compliant or multi-degree-of-freedom joints. We make the reinforcing braid of a pneumatic artificial muscle (PAM or McKibben muscle) “smart” by weaving it from conductive insulated wires. These wires form a solenoid-like circuit with an inductance that more than doubles over the PAM contraction. The reinforcing and sensing fibers can be used to measure the contraction of a PAM actuator with a simple linear function of the measured inductance, whereas other proposed self-sensing techniques rely on the addition of special elastomers or transducers, the technique presented in this paper can be implemented without modifications of this kind. We present and experimentally validate two models for Smart Braid sensors based on the long solenoid approximation and the Neumann formula, respectively. We test a McKibben muscle made from a Smart Braid in quasi-static conditions with various end loads and in dynamic conditions. We also test the performance of the Smart Braid sensor alongside steel.",
"title": ""
},
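The first of the two models mentioned above treats the braid as a long solenoid, for which L = mu_0 * N^2 * A / l. The numbers below are invented braid dimensions, used only to show why a shortening (and widening) muscle raises the inductance that the sensor reads out.

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7                      # vacuum permeability (H/m)

def solenoid_inductance(n_turns, radius_m, length_m):
    # Long-solenoid approximation: L = mu0 * N^2 * (pi r^2) / l
    return mu0 * n_turns**2 * np.pi * radius_m**2 / length_m

L_rest = solenoid_inductance(n_turns=120, radius_m=0.008, length_m=0.15)
L_contracted = solenoid_inductance(n_turns=120, radius_m=0.010, length_m=0.11)
print(L_rest, L_contracted, L_contracted / L_rest)   # inductance grows as the PAM contracts
```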
{
"docid": "1ca692464d5d7f4e61647bf728941519",
"text": "During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RS(C)) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RS(C) neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses.",
"title": ""
},
{
"docid": "e444dcc97882005658aca256991e816e",
"text": "The terms superordinate, hyponym, and subordinate designate the hierarchical taxonomic relationship of words. They also represent categories and concepts. This relationship is a subject of interest for anthropology, cognitive psychology, psycholinguistics, linguistic semantics, and cognitive linguistics. Taxonomic hierarchies are essentially classificatory systems, and they are supposed to reflect the way that speakers of a language categorize the world of experience. A well-formed taxonomy offers an orderly and efficient set of categories at different levels of specificity (Cruse 2000:180). However, the terms and levels of taxonomic hierarchy used in each discipline vary. This makes it difficult to carry out cross-disciplinary readings on the hierarchical taxonomy of words or categories, which act as an interface in these cognitive-based cross-disciplinary ventures. Not only words— terms and concepts differ but often the nature of the problem is compounded as some terms refer to differing word classes, categories and concepts at the same time. Moreover, the lexical relationship of terms among these lexical hierarchies is far from clear. As a result two lines of thinking can be drawn from the literature: (1) technical terms coined for the hierarchical relationship of words are conflicting and do not reflect reality or environment, and (2) the relationship among these hierarchies of word levels and the underlying principles followed to explain them are uncertain except that of inclusion.",
"title": ""
},
{
"docid": "06ba81270357c9bcf1dd8f1871741537",
"text": "The ability of a normal human listener to recognize objects in the environment from only the sounds they produce is extraordinarily robust with regard to characteristics of the acoustic environment and of other competing sound sources. In contrast, computer systems designed to recognize sound sources function precariously, breaking down whenever the target sound is degraded by reverberation, noise, or competing sounds. Robust listening requires extensive contextual knowledge, but the potential contribution of sound-source recognition to the process of auditory scene analysis has largely been neglected by researchers building computational models of the scene analysis process. This thesis proposes a theory of sound-source recognition, casting recognition as a process of gathering information to enable the listener to make inferences about objects in the environment or to predict their behavior. In order to explore the process, attention is restricted to isolated sounds produced by a small class of sound sources, the non-percussive orchestral musical instruments. Previous research on the perception and production of orchestral instrument sounds is reviewed from a vantage point based on the excitation and resonance structure of the sound-production process, revealing a set of perceptually salient acoustic features. A computer model of the recognition process is developed that is capable of “listening” to a recording of a musical instrument and classifying the instrument as one of 25 possibilities. The model is based on current models of signal processing in the human auditory system. It explicitly extracts salient acoustic features and uses a novel improvisational taxonomic architecture (based on simple statistical pattern-recognition techniques) to classify the sound source. The performance of the model is compared directly to that of skilled human listeners, using",
"title": ""
},
{
"docid": "4fe39e3d2e7c04263e9015c773a755fb",
"text": "This paper presents a novel approach to building natural language interface to databases (NLIDB) based on Computational Paninian Grammar (CPG). It uses two distinct stages of processing, namely, syntactic processing followed by semantic processing. Syntactic processing makes the processing more general and robust. CPG is a dependency framework in which the analysis is in terms of syntactico-semantic relations. The closeness of these relations makes semantic processing easier and more accurate. It also makes the systems more portable.",
"title": ""
},
{
"docid": "9548bd2e37fdd42d09dc6828ac4675f9",
"text": "Recent years have seen increasing interest in ranking elite athletes and teams in professional sports leagues, and in predicting the outcomes of games. In this work, we draw an analogy between this problem and one in the field of search engine optimization, namely, that of ranking webpages on the Internet. Motivated by the famous PageRank algorithm, our TeamRank methods define directed graphs of sports teams based on the observed outcomes of individual games, and use these networks to infer the importance of teams that determines their rankings. In evaluating these methods on data from recent seasons in the National Football League (NFL) and National Basketball Association (NBA), we find that they can predict the outcomes of games with up to 70% accuracy, and that they provide useful rankings of teams that cluster by league divisions. We also propose some extensions to TeamRank that consider overall team win records and shifts in momentum over time.",
"title": ""
},
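The PageRank analogy above can be made concrete by pointing an edge from each game's loser to its winner and running the usual damped power iteration, so that rank mass accumulates at teams that beat other highly ranked teams. The teams, results, and damping factor below are toy assumptions; the paper's TeamRank variants additionally weight in overall win records and momentum over time.

```python
import numpy as np

teams = ["A", "B", "C", "D"]
games = [("A", "B"), ("C", "B"), ("D", "A"), ("C", "D")]   # (loser, winner) pairs, made up

n = len(teams)
idx = {t: i for i, t in enumerate(teams)}
M = np.zeros((n, n))
for loser, winner in games:
    M[idx[winner], idx[loser]] += 1.0           # each loss sends rank mass to the winner
col_sums = M.sum(axis=0)
M[:, col_sums > 0] /= col_sums[col_sums > 0]    # column-stochastic where defined

d, r = 0.85, np.full(n, 1.0 / n)
for _ in range(100):                            # damped power iteration, PageRank-style
    r = (1 - d) / n + d * (M @ r)
print(dict(zip(teams, np.round(r / r.sum(), 3))))
```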
{
"docid": "54dc81aca62267eecf1f5f8a8ace14b9",
"text": "Advances in deep learning have led to substantial increases in prediction accuracy but have been accompanied by increases in the cost of rendering predictions. We conjecture that for a majority of real-world inputs, the recent advances in deep learning have created models that effectively “over-think” on simple inputs. In this paper we revisit the classic question of building model cascades that primarily leverage class asymmetry to reduce cost. We introduce the “I Don’t Know” (IDK) prediction cascades framework, a general framework to systematically compose a set of pre-trained models to accelerate inference without a loss in prediction accuracy. We propose two search based methods for constructing cascades as well as a new cost-aware objective within this framework. The proposed IDK cascade framework can be easily adopted in the existing model serving systems without additional model retraining. We evaluate the proposed techniques on a range of benchmarks to demonstrate the effectiveness of the proposed framework.",
"title": ""
},
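A minimal version of the IDK idea described above: a cheap model answers when its confidence clears a threshold and otherwise says "I don't know", deferring to the accurate model. The two callables and the 0.9 threshold are placeholders; the paper searches over cascade compositions and optimizes a cost-aware objective rather than using a fixed rule.

```python
import numpy as np

def cascade_predict(x, fast_model, slow_model, threshold=0.9):
    # fast_model / slow_model return class-probability vectors for input x.
    p = fast_model(x)
    if np.max(p) >= threshold:        # confident: stop early and skip the expensive model
        return int(np.argmax(p)), "fast"
    return int(np.argmax(slow_model(x))), "slow"

fast = lambda x: np.array([0.55, 0.45])   # low confidence, so the cascade defers
slow = lambda x: np.array([0.1, 0.9])
print(cascade_predict(None, fast, slow))
```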
{
"docid": "a58cbbff744568ae7abd2873d04d48e9",
"text": "Training real-world Deep Neural Networks (DNNs) can take an eon (i.e., weeks or months) without leveraging distributed systems. Even distributed training takes inordinate time, of which a large fraction is spent in communicating weights and gradients over the network. State-of-the-art distributed training algorithms use a hierarchy of worker-aggregator nodes. The aggregators repeatedly receive gradient updates from their allocated group of the workers, and send back the updated weights. This paper sets out to reduce this significant communication cost by embedding data compression accelerators in the Network Interface Cards (NICs). To maximize the benefits of in-network acceleration, the proposed solution, named INCEPTIONN (In-Network Computing to Exchange and Process Training Information Of Neural Networks), uniquely combines hardware and algorithmic innovations by exploiting the following three observations. (1) Gradients are significantly more tolerant to precision loss than weights and as such lend themselves better to aggressive compression without the need for the complex mechanisms to avert any loss. (2) The existing training algorithms only communicate gradients in one leg of the communication, which reduces the opportunities for in-network acceleration of compression. (3) The aggregators can become a bottleneck with compression as they need to compress/decompress multiple streams from their allocated worker group. To this end, we first propose a lightweight and hardware-friendly lossy-compression algorithm for floating-point gradients, which exploits their unique value characteristics. This compression not only enables significantly reducing the gradient communication with practically no loss of accuracy, but also comes with low complexity for direct implementation as a hardware block in the NIC. To maximize the opportunities for compression and avoid the bottleneck at aggregators, we also propose an aggregator-free training algorithm that exchanges gradients in both legs of communication in the group, while the workers collectively perform the aggregation in a distributed manner. Without changing the mathematics of training, this algorithm leverages the associative property of the aggregation operator and enables our in-network accelerators to (1) apply compression for all communications, and (2) prevent the aggregator nodes from becoming bottlenecks. Our experiments demonstrate that INCEPTIONN reduces the communication time by 70.9~80.7% and offers 2.2~3.1x speedup over the conventional training system, while achieving the same level of accuracy.",
"title": ""
},
{
"docid": "acf514a4aa34487121cc853e55ceaed4",
"text": "Stereotype threat spillover is a situational predicament in which coping with the stress of stereotype confirmation leaves one in a depleted volitional state and thus less likely to engage in effortful self-control in a variety of domains. We examined this phenomenon in 4 studies in which we had participants cope with stereotype and social identity threat and then measured their performance in domains in which stereotypes were not \"in the air.\" In Study 1 we examined whether taking a threatening math test could lead women to respond aggressively. In Study 2 we investigated whether coping with a threatening math test could lead women to indulge themselves with unhealthy food later on and examined the moderation of this effect by personal characteristics that contribute to identity-threat appraisals. In Study 3 we investigated whether vividly remembering an experience of social identity threat results in risky decision making. Finally, in Study 4 we asked whether coping with threat could directly influence attentional control and whether the effect was implemented by inefficient performance monitoring, as assessed by electroencephalography. Our results indicate that stereotype threat can spill over and impact self-control in a diverse array of nonstereotyped domains. These results reveal the potency of stereotype threat and that its negative consequences might extend further than was previously thought.",
"title": ""
},
{
"docid": "3a95b876619ce4b666278810b80cae77",
"text": "On 14 November 2016, northeastern South Island of New Zealand was struck by a major moment magnitude (Mw) 7.8 earthquake. Field observations, in conjunction with interferometric synthetic aperture radar, Global Positioning System, and seismology data, reveal this to be one of the most complex earthquakes ever recorded. The rupture propagated northward for more than 170 kilometers along both mapped and unmapped faults before continuing offshore at the island’s northeastern extent. Geodetic and field observations reveal surface ruptures along at least 12 major faults, including possible slip along the southern Hikurangi subduction interface; extensive uplift along much of the coastline; and widespread anelastic deformation, including the ~8-meter uplift of a fault-bounded block. This complex earthquake defies many conventional assumptions about the degree to which earthquake ruptures are controlled by fault segmentation and should motivate reevaluation of these issues in seismic hazard models.",
"title": ""
},
{
"docid": "64c44342abbce474e21df67c0a5cc646",
"text": "In this paper it is shown that the principal eigenvector is a necessary representation of the priorities derived from a positive reciprocal pairwise comparison judgment matrix A 1⁄4 ðaijÞ when A is a small perturbation of a consistent matrix. When providing numerical judgments, an individual attempts to estimate sequentially an underlying ratio scale and its equivalent consistent matrix of ratios. Near consistent matrices are essential because when dealing with intangibles, human judgment is of necessity inconsistent, and if with new information one is able to improve inconsistency to near consistency, then that could improve the validity of the priorities of a decision. In addition, judgment is much more sensitive and responsive to large rather than to small perturbations, and hence once near consistency is attained, it becomes uncertain which coefficients should be perturbed by small amounts to transform a near consistent matrix to a consistent one. If such perturbations were forced, they could be arbitrary and thus distort the validity of the derived priority vector in representing the underlying decision. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "6e4798c01a0a241d1f3746cd98ba9421",
"text": "BACKGROUND\nLarge blood-based prospective studies can provide reliable assessment of the complex interplay of lifestyle, environmental and genetic factors as determinants of chronic disease.\n\n\nMETHODS\nThe baseline survey of the China Kadoorie Biobank took place during 2004-08 in 10 geographically defined regions, with collection of questionnaire data, physical measurements and blood samples. Subsequently, a re-survey of 25,000 randomly selected participants was done (80% responded) using the same methods as in the baseline. All participants are being followed for cause-specific mortality and morbidity, and for any hospital admission through linkages with registries and health insurance (HI) databases.\n\n\nRESULTS\nOverall, 512,891 adults aged 30-79 years were recruited, including 41% men, 56% from rural areas and mean age was 52 years. The prevalence of ever-regular smoking was 74% in men and 3% in women. The mean blood pressure was 132/79 mmHg in men and 130/77 mmHg in women. The mean body mass index (BMI) was 23.4 kg/m(2) in men and 23.8 kg/m(2) in women, with only 4% being obese (>30 kg/m(2)), and 3.2% being diabetic. Blood collection was successful in 99.98% and the mean delay from sample collection to processing was 10.6 h. For each of the main baseline variables, there is good reproducibility but large heterogeneity by age, sex and study area. By 1 January 2011, over 10,000 deaths had been recorded, with 91% of surviving participants already linked to HI databases.\n\n\nCONCLUSION\nThis established large biobank will be a rich and powerful resource for investigating genetic and non-genetic causes of many common chronic diseases in the Chinese population.",
"title": ""
},
{
"docid": "49387b129347f7255bf77ad9cc726275",
"text": "Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the “long tail” of this distribution requires enormous amounts of data. Representations of rare words trained directly on end-tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained against the end task. We show that this improves results against baselines where embeddings are trained on the end task in a reading comprehension task, a recognizing textual entailment task, and in language modelling.",
"title": ""
},
{
"docid": "ed351364658a99d4d9c10dd2b9be3c92",
"text": "Information technology continues to provide opportunities to alter the decisionmaking behavior of individuals, groups and organizations. Two related changes that are emerging are social media and Web 2.0 technologies. These technologies can positively and negatively impact the rationality and effectiveness of decision-making. For example, changes that help marketing managers alter consumer decision behavior may result in poorer decisions by consumers. Also, managers who heavily rely on a social network rather than expert opinion and facts may make biased decisions. A number of theories can help explain how social media may impact decision-making and the consequences.",
"title": ""
},
{
"docid": "0b705fc98638cf042e84417849259074",
"text": "G et al. [Gallego, G., G. Iyengar, R. Phillips, A. Dubey. 2004. Managing flexible products on a network. CORC Technical Report TR-2004-01, Department of Industrial Engineering and Operations Research, Columbia University, New York.] recently proposed a choice-based deterministic linear programming model (CDLP) for network revenue management (RM) that parallels the widely used deterministic linear programming (DLP) model. While they focused on analyzing “flexible products”—a situation in which the provider has the flexibility of using a collection of products (e.g., different flight times and/or itineraries) to serve the same market demand (e.g., an origin-destination connection)—their approach has broader implications for understanding choice-based RM on a network. In this paper, we explore the implications in detail. Specifically, we characterize optimal offer sets (sets of available network products) by extending to the network case a notion of “efficiency” developed by Talluri and van Ryzin [Talluri, K. T., G. J. van Ryzin. 2004. Revenue management under a general discrete choice model of consumer behavior. Management Sci. 50 15–33.] for the single-leg, choice-based RM problem. We show that, asymptotically, as demand and capacity are scaled up, only these efficient sets are used in an optimal policy. This analysis suggests that efficiency is a potentially useful approach for identifying “good” offer sets on networks, as it is in the case of single-leg problems. Second, we propose a practical decomposition heuristic for converting the static CDLP solution into a dynamic control policy. The heuristic is quite similar to the familiar displacement-adjusted virtual nesting (DAVN) approximation used in traditional network RM, and it significantly improves on the performance of the static LP solution. We illustrate the heuristic on several numerical examples.",
"title": ""
},
{
"docid": "7feda29a5edf6855895f91f80c3286a4",
"text": "The ability to conduct logical reasoning is a fundamental aspect of intelligent behavior, and thus an important problem along the way to human-level artificial intelligence. Traditionally, symbolic logic-based methods from the field of knowledge representation and reasoning have been used to equip agents with capabilities that resemble human logical reasoning qualities. More recently, however, there has been an increasing interest in using machine learning rather than symbolic logic-based formalisms to tackle these tasks. In this paper, we employ state-of-the-art methods for training deep neural networks to devise a novel model that is able to learn how to effectively perform logical reasoning in the form of basic ontology reasoning. This is an important and at the same time very natural logical reasoning task, which is why the presented approach is applicable to a plethora of important real-world problems. We present the outcomes of several experiments, which show that our model learned to perform precise ontology reasoning on diverse and challenging tasks. Furthermore, it turned out that the suggested approach suffers much less from different obstacles that prohibit logic-based symbolic reasoning, and, at the same time, is surprisingly plausible from a biological point of view.",
"title": ""
},
{
"docid": "9409922d01a00695745939b47e6446a0",
"text": "The Suricata intrusion-detection system for computer-network monitoring has been advanced as an open-source improvement on the popular Snort system that has been available for over a decade. Suricata includes multi-threading to improve processing speed beyond Snort. Previous work comparing the two products has not used a real-world setting. We did this and evaluated the speed, memory requirements, and accuracy of the detection engines in three kinds of experiments: (1) on the full traffic of our school as observed on its \" backbone\" in real time, (2) on a supercomputer with packets recorded from the backbone, and (3) in response to malicious packets sent by a red-teaming product. We used the same set of rules for both products with a few small exceptions where capabilities were missing. We conclude that Suricata can handle larger volumes of traffic than Snort with similar accuracy, and that its performance scaled roughly linearly with the number of processors up to 48. We observed no significant speed or accuracy advantage of Suricata over Snort in its current state, but it is still being developed. Our methodology should be useful for comparing other intrusion-detection products.",
"title": ""
},
{
"docid": "7065db83dbe470f430789ea8e464bd04",
"text": "A compact multiband antenna is proposed that consists of a printed circular disc monopole antenna with an L-shaped slot cut out of the ground, forming a defected ground plane. Analysis of the current distribution on the antenna reveals that at low frequencies the addition of the slot creates two orthogonal current paths, which are responsible for two additional resonances in the response of the antenna. By virtue of the orthogonality of these modes the antenna exhibits orthogonal pattern diversity, while enabling the adjacent resonances to be merged, forming a wideband low-frequency response and maintaining the inherent wideband high-frequency response of the monopole. The antenna exhibits a measured -10 dB S 11 bandwidth of 600 MHz from 2.68 to 3.28 GHz, and a bandwidth of 4.84 GHz from 4.74 to 9.58 GHz, while the total size of the antenna is only 24 times 28.3 mm. The efficiency is measured using a modified Wheeler cap method and is verified using the gain comparison method to be approximately 90% at both 2.7 and 5.5 GHz.",
"title": ""
}
] |
scidocsrr
|
55b04e302617ae736e974e365ca8da70
|
COCA: Computation Offload to Clouds Using AOP
|
[
{
"docid": "74227709f4832c3978a21abb9449203b",
"text": "Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device’s main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device’s application processor.",
"title": ""
}
] |
[
{
"docid": "3c695b12b47f358012f10dc058bf6f6a",
"text": "This paper addresses the problem of classifying places in the environment of a mobile robot into semantic categories. We believe that semantic information about the type of place improves the capabilities of a mobile robot in various domains including localization, path-planning, or human-robot interaction. Our approach uses AdaBoost, a supervised learning algorithm, to train a set of classifiers for place recognition based on laser range data. In this paper we describe how this approach can be applied to distinguish between rooms, corridors, doorways, and hallways. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various environments.",
"title": ""
},
{
"docid": "3b8817b9838374ec58f75f43fbcf209c",
"text": "Background. Breastfeeding is the optimal method for achieving a normal growth and development of the baby. This study aimed to study mothers' perceptions and practices regarding breastfeeding in Mangalore, India. Methodology. A cross-sectional study of 188 mothers was conducted using a structured proforma. Results. Importance of breast feeding was known to most mothers. While initiation of breast feeding within one hour of birth was done by majority of mothers, few had discarded colostrum and adopted prelacteal feeding. Mothers opined that breast feeding is healthy for their babies (96.3%) and easier than infant feeding (79.8%), does not affect marital relationship (51%), and decreases family expenditure (61.1%). However, there were poor perceptions regarding the advantages of breast milk with respect to nutritive value, immune effect, and disease protection. Few respondents reported discontinuation of breastfeeding in previous child if the baby had fever/cold (6%) or diarrhea (18%) and vomiting (26%). There was a statistically significant association between mother's educational level and perceived importance of breastfeeding and also between the mode of delivery and initiation of breast feeding (p < 0.05). Conclusion. Importance of breast feeding was known to most mothers. Few perceptions related to breast milk and feeding along with myths and disbeliefs should be rectified by health education.",
"title": ""
},
{
"docid": "3efaaabf9a93460bace2e70abc71801d",
"text": "BACKGROUND\nNumerous studies report an association between social support and protection from depression, but no systematic review or meta-analysis exists on this topic.\n\n\nAIMS\nTo review systematically the characteristics of social support (types and source) associated with protection from depression across life periods (childhood and adolescence; adulthood; older age) and by study design (cross-sectional v cohort studies).\n\n\nMETHOD\nA systematic literature search conducted in February 2015 yielded 100 eligible studies. Study quality was assessed using a critical appraisal checklist, followed by meta-analyses.\n\n\nRESULTS\nSources of support varied across life periods, with parental support being most important among children and adolescents, whereas adults and older adults relied more on spouses, followed by family and then friends. Significant heterogeneity in social support measurement was noted. Effects were weaker in both magnitude and significance in cohort studies.\n\n\nCONCLUSIONS\nKnowledge gaps remain due to social support measurement heterogeneity and to evidence of reverse causality bias.",
"title": ""
},
{
"docid": "76d297fe81d50d9efa170fb033f3e0df",
"text": "In recent years, many companies have developed various distributed computation frameworks for processing machine learning (ML) jobs in clusters. Networking is a well-known bottleneck for ML systems and the cluster demands efficient scheduling for huge traffic (up to 1GB per flow) generated by ML jobs. Coflow has been proven an effective abstraction to schedule flows of such data-parallel applications. However, the implementation of coflow scheduling policy is constrained when coflow characteristics are unknown a prior, and when TCP congestion control misinterprets the congestion signal leading to low throughput. Fortunately, traffic patterns experienced by some ML jobs support to speculate the complete coflow characteristic with limited information. Hence this paper summarizes coflow from these ML jobs as self-similar coflow and proposes a decentralized self-similar coflow scheduler Cicada. Cicada assigns each coflow a probe flow to speculate its characteristics during the transportation and employs the Shortest Job First (SJF) to separate coflow into strict priority queues based on the speculation result. To achieve full bandwidth for throughput- sensitive ML jobs, and to guarantee the scheduling policy implementation, Cicada promotes the elastic transport-layer rate control that outperforms prior works. Large-scale simulations show that Cicada completes coflow 2.08x faster than the state-of-the-art schemes in the information-agnostic scenario.",
"title": ""
},
{
"docid": "b19630c809608601948a7f16910396f7",
"text": "This paper presents a novel, smart and portable active knee rehabilitation orthotic device (AKROD) designed to train stroke patients to correct knee hyperextension during stance and stiff-legged gait (defined as reduced knee flexion during swing). The knee brace provides variable damping controlled in ways that foster motor recovery in stroke patients. A resistive, variable damper, electro-rheological fluid (ERF) based component is used to facilitate knee flexion during stance by providing resistance to knee buckling. Furthermore, the knee brace is used to assist in knee control during swing, i.e. to allow patients to achieve adequate knee flexion for toe clearance and adequate knee extension in preparation to heel strike. The detailed design of AKROD, the first prototype built, closed loop control results and initial human testing are presented here",
"title": ""
},
{
"docid": "97a6a77cfa356636e11e02ffe6fc0121",
"text": "© 2019 Muhammad Burhan Hafez et al., published by De Gruyter. This work is licensed under the Creative CommonsAttribution-NonCommercial-NoDerivs4.0License. Paladyn, J. Behav. Robot. 2019; 10:14–29 Research Article Open Access Muhammad Burhan Hafez*, Cornelius Weber, Matthias Kerzel, and Stefan Wermter Deep intrinsically motivated continuous actor-critic for eflcient robotic visuomotor skill learning https://doi.org/10.1515/pjbr-2019-0005 Received June 6, 2018; accepted October 29, 2018 Abstract: In this paper, we present a new intrinsically motivated actor-critic algorithm for learning continuous motor skills directly from raw visual input. Our neural architecture is composed of a critic and an actor network. Both networks receive the hidden representation of a deep convolutional autoencoder which is trained to reconstruct the visual input, while the centre-most hidden representation is also optimized to estimate the state value. Separately, an ensemble of predictive world models generates, based on its learning progress, an intrinsic reward signal which is combined with the extrinsic reward to guide the exploration of the actor-critic learner. Our approach is more data-efficient and inherently more stable than the existing actor-critic methods for continuous control from pixel data. We evaluate our algorithm for the task of learning robotic reaching and grasping skills on a realistic physics simulator and on a humanoid robot. The results show that the control policies learnedwith our approach can achieve better performance than the compared state-of-the-art and baseline algorithms in both dense-reward and challenging sparse-reward settings.",
"title": ""
},
{
"docid": "eb83222ce7180fe3039c00eeb8600d2f",
"text": "Cloud-assisted video streaming has emerged as a new paradigm to optimize multimedia content distribution over the Internet. This article investigates the problem of streaming cloud-assisted real-time video to multiple destinations (e.g., cloud video conferencing, multi-player cloud gaming, etc.) over lossy communication networks. The user diversity and network dynamics result in the delay differences among multiple destinations. This research proposes <underline>D</underline>ifferentiated cloud-<underline>A</underline>ssisted <underline>VI</underline>deo <underline>S</underline>treaming (DAVIS) framework, which proactively leverages such delay differences in video coding and transmission optimization. First, we analytically formulate the optimization problem of joint coding and transmission to maximize received video quality. Second, we develop a quality optimization framework that integrates the video representation selection and FEC (Forward Error Correction) packet interleaving. The proposed DAVIS is able to effectively perform differentiated quality optimization for multiple destinations by taking advantage of the delay differences in cloud-assisted video streaming system. We conduct the performance evaluation through extensive experiments with the Amazon EC2 instances and Exata emulation platform. Evaluation results show that DAVIS outperforms the reference cloud-assisted streaming solutions in video quality and delay performance.",
"title": ""
},
{
"docid": "1ed93d114804da5714b7b612f40e8486",
"text": "Volleyball players are at high risk of overuse shoulder injuries, with spike biomechanics a perceived risk factor. This study compared spike kinematics between elite male volleyball players with and without a history of shoulder injuries. Height, mass, maximum jump height, passive shoulder rotation range of motion (ROM), and active trunk ROM were collected on elite players with (13) and without (11) shoulder injury history and were compared using independent samples t tests (P < .05). The average of spike kinematics at impact and range 0.1 s before and after impact during down-the-line and cross-court spike types were compared using linear mixed models in SPSS (P < .01). No differences were detected between the injured and uninjured groups. Thoracic rotation and shoulder abduction at impact and range of shoulder rotation velocity differed between spike types. The ability to tolerate the differing demands of the spike types could be used as return-to-play criteria for injured athletes.",
"title": ""
},
{
"docid": "7858fb4630f385d07e00cb5733e35c85",
"text": "Recommender system is used to recommend items and services to the users and provide recommendations based on prediction. The prediction performance plays vital role in the quality of recommendation. To improve the prediction performance, this paper proposed a new hybrid method based on naïve Bayesian classifier with Gaussian correction and feature engineering. The proposed method is experimented on the well known movie lens 100k data set. The results show better results when compared with existing methods.",
"title": ""
},
{
"docid": "2ed183563bd5cdaafa96b03836883730",
"text": "This is an introduction to the Classic Paper on MOSFET scaling by R. Dennardet al., “Design of Ion-Implanted MOSFET’s with Very Small Physical Dimensions,” published in the IEEE Journal of Solid-State Circuitsin October 1974. The history of scaling and its application to very large scale integration (VLSI) MOSFET technology is traced from 1970 to 1998. The role of scaling in the profound improvements in power delay product over the last three decades is analyzed in basic terms.",
"title": ""
},
{
"docid": "75642d6a79f6b9bb8b02f6d8ded6a370",
"text": "Spectral indices as a selection tool in plant breeding could improve genetic gains for different important traits. The objectives of this study were to assess the potential of using spectral reflectance indices (SRI) to estimate genetic variation for in-season biomass production, leaf chlorophyll, and canopy temperature (CT) in wheat (Triticum aestivum L.) under irrigated conditions. Three field experiments, GHIST (15 CIMMYT globally adapted historic genotypes), RILs1 (25 recombinant inbred lines [RILs]), and RILs2 (36 RILs) were conducted under irrigated conditions at the CIMMYT research station in northwest Mexico in three different years. Five SRI were evaluated to differentiate genotypes for biomass production. In general, genotypic variation for all the indices was significant. Near infrared radiation (NIR)–based indices gave the highest levels of associationwith biomass production and the higher associations were observed at heading and grainfilling, rather than at booting. Overall, NIR-based indices were more consistent and differentiated biomass more effectively compared to the other indices. Indices based on ratio of reflection spectra correlatedwith SPADchlorophyll values, and the associationwas stronger at the generative growth stages. These SRI also successfully differentiated the SPAD values at the genotypic level. The NIR-based indices showed a strong and significant association with CT at the heading and grainfilling stages. These results demonstrate the potential of using SRI as a breeding tool to select for increased genetic gains in biomass and chlorophyll content, plus for cooler canopies. SIGNIFICANT PROGRESS in grain yield of spring wheat under irrigated conditions has been made through the classical breeding approach (Slafer et al., 1994), even though the genetic basis of yield improvement in wheat is not well established (Reynolds et al., 1999). Several authors have reported that progress in grain yield is mainly attributed to better partitioning of photosynthetic products (Waddington et al., 1986; Calderini et al., 1995; Sayre et al., 1997). The systematic increase in the partitioning of assimilates (harvest index) has a theoretical upper limit of approximately 60% (Austin et al., 1980). Further yield increases in wheat through improvement in harvest index will be limited without a further increase in total crop biomass (Austin et al., 1980; Slafer and Andrade, 1991; Reynolds et al., 1999). Though until relatively recently biomass was not commonly associated with yield gains, increases in biomass of spring wheat have been reported (Waddington et al., 1986; Sayre et al., 1997) and more recently in association with yield increases (Singh et al., 1998; Reynolds et al., 2005; Shearman et al., 2005). Thus, a breeding approach is needed that will select genotypes with higher biomass capacity, while maintaining the high partitioning rate of photosynthetic products. Direct estimation of biomass is a timeand laborintensive undertaking. Moreover, destructive in-season sampling involves large sampling errors (Whan et al., 1991) and reduces the final area for estimation of grain yield and final biomass. Regan et al. (1992) demonstrated a method to select superior genotypes of spring wheat for early vigor under rainfed conditions using a destructive sampling technique, but such sampling is impossible for breeding programs where a large number of genotypes are being screened for various desirable traits. 
Spectral reflectance indices are a potentially rapid technique that could assess biomass at the genotypic level without destructive sampling (Elliott and Regan, 1993; Smith et al., 1993; Bellairs et al., 1996; Peñuelas et al., 1997). Canopy light reflectance properties based mainly on the absorption of light at a specific wavelength are associated with specific plant characteristics. The spectral reflectance in the visible (VIS) wavelengths (400–700 nm) depends on the absorption of light by leaf chlorophyll and associated pigments such as carotenoid and anthocyanins. The reflectance of the VIS wavelengths is relatively low because of the high absorption of light energy by these pigments. In contrast, the reflectance of the NIR wavelengths (700–1300 nm) is high, since it is not absorbed by plant pigments and is scattered by plant tissue at different levels in the canopy, such that much of it is reflected back rather than being absorbed by the soil (Knipling, 1970). Spectral reflectance indices were developed on the basis of simple mathematical formula, such as ratios or differences between the reflectance at given wavelengths (Araus et al., 2001). Simple ratio (SR = NIR/VIS) and the normalized difference vegetation",
"title": ""
},
{
"docid": "b4910e355c44077eb27c62a0c8237204",
"text": "Our proof is built on Perron-Frobenius theorem, a seminal work in matrix theory (Meyer 2000). By Perron-Frobenius theorem, the power iteration algorithm for predicting top K persuaders converges to a unique C and this convergence is independent of the initialization of C if the persuasion probability matrix P is nonnegative, irreducible, and aperiodic (Heath 2002). We first show that P is nonnegative. Each component of the right hand side of Equation (10) is positive except nD $ 0; thus, persuasion probability pij estimated with Equation (10) is positive, for all i, j = 1, 2, ..., n and i ... j. Because all diagonal elements of P are equal to zero and all non-diagonal elements of P are positive persuasion probabilities, P is nonnegative.",
"title": ""
},
{
"docid": "bc9666dbfd3d7eea16ee5793c883eb4c",
"text": "This work introduces VRNN-BPR, a novel deep learning model, which is utilized in sessionbased Recommender systems tackling the data sparsity problem. The proposed model combines a Recurrent Neural Network with an amortized variational inference setup (AVI) and a Bayesian Personalized Ranking in order to produce predictions on sequence-based data and generate recommendations. The model is assessed using a large real-world dataset and the results demonstrate its superiority over current state-of-the-art techniques.",
"title": ""
},
{
"docid": "041b308fe83ac9d5a92e33fd9c84299a",
"text": "Spaceborne synthetic aperture radar systems are severely constrained to a narrow swath by ambiguity limitations. Here a vertically scanned-beam synthetic aperture system (SCANSAR) is proposed as a solution to this problem. The potential length of synthetic aperture must be shared between beam positions, so the along-track resolution is poorer; a direct tradeoff exists between resolution and swath width. The length of the real aperture is independently traded against the number of scanning positions. Design curves and equations are presented for spaceborne SCANSARs for altitudes between 400 and 1400 km and inner angles of incidence between 20° and 40°. When the real antenna is approximately square, it may also be used for a microwave radiometer. The combined radiometer and synthetic-aperture (RADISAR) should be useful for those applications where the poorer resolution of the radiometer is useful for some purposes, but the finer resolution of the radar is needed for others.",
"title": ""
},
{
"docid": "ac7b607cc261654939868a62822a58eb",
"text": "Interdigitated capacitors (IDC) are extensively used for a variety of chemical and biological sensing applications. Printing and functionalizing these IDC sensors on bendable substrates will lead to new innovations in healthcare and medicine, food safety inspection, environmental monitoring, and public security. The synthesis of an electrically conductive aqueous graphene ink stabilized in deionized water using the polymer Carboxymethyl Cellulose (CMC) is introduced in this paper. CMC is a nontoxic hydrophilic cellulose derivative used in food industry. The water-based graphene ink is then used to fabricate IDC sensors on mechanically flexible polyimide substrates. The capacitance and frequency response of the sensors are analyzed, and the effect of mechanical stress on the electrical properties is examined. Experimental results confirm low thin film resistivity (~6;.6×10-3 Ω-cm) and high capacitance (>100 pF). The printed sensors are then used to measure water content of ethanol solutions to demonstrate the proposed conductive ink and fabrication methodology for creating chemical sensors on thin membranes.",
"title": ""
},
{
"docid": "bd590555337d3ada2c641c5f1918cf2c",
"text": "Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.",
"title": ""
},
{
"docid": "ce791426ecd9e110f56f1d3d221419c9",
"text": "Software bugs can cause significant financial loss and even the loss of human lives. To reduce such loss, developers devote substantial efforts to fixing bugs, which generally requires much expertise and experience. Various approaches have been proposed to aid debugging. An interesting recent research direction is automatic program repair, which achieves promising results, and attracts much academic and industrial attention. However, people also cast doubt on the effectiveness and promise of this direction. A key criticism is to what extent such approaches can fix real bugs. As only research prototypes for these approaches are available, it is infeasible to address the criticism by evaluating them directly on real bugs. Instead, in this paper, we design and develop BugStat, a tool that extracts and analyzes bug fixes. With BugStat's support, we conduct an empirical study on more than 9,000 real-world bug fixes from six popular Java projects. Comparing the nature of manual fixes with automatic program repair, we distill 15 findings, which are further summarized into four insights on the two key ingredients of automatic program repair: fault localization and faulty code fix. In addition, we provide indirect evidence on the size of the search space to fix real bugs and find that bugs may also reside in non-source files. Our results provide useful guidance and insights for improving the state-of-the-art of automatic program repair.",
"title": ""
},
{
"docid": "4c2223d141f6c9811f31c5da80d61a64",
"text": "Improvement of blast-induced fragmentation and crusher efficiency by means of optimized drilling and blasting in Aitik. ACKNOWLEDGMENTS The thesis project presented in this report was conducted in Boliden's Aitik mine; thereby I wish to gratefully thank Boliden Mines for their financial and technical support. I would like to express my very great appreciation to Ulf Nyberg, my supervisor at Luleå University of Technology, and Evgeny Novikov, my supervisor in Boliden for their patient guidance, technical support and valuable suggestions on this project. Useful advice given by Dr. Daniel Johansson is also greatly appreciated; I wish to acknowledge the constructive recommendations provided by Nikolaos Petropoulos as well. My special thanks are extended to the staff of Boliden Mines for all their help and technical support in Aitik. I am particularly grateful for the assistance given by Torbjörn Krigsman, Nils Johansson and Peter Palo. I would also like to acknowledge the help provided collection in Aitik mine. I would also like to thank Forcit company for their assistance with the collection of the data, my special thanks goes to Per-Arne Kortelainen for all his contribution. Finally my deep gratitude goes to my parents for their invaluable support, patience and encouragement throughout my academic studies. SUMMARY Rock blasting is one of the most dominating operations in open pit mining efficiency. As many downstream processes depend on the blast-induced fragmentation, an optimized blasting strategy can influence the total revenue of a mine to a large extent. Boliden Aitik mine in northern Sweden is one of the largest copper mines in Europe. The annual production of the mine is expected to reach 36 million tonnes of ore in 2014; so continuous efforts are being made to boost the production. Highly automated equipment and new processing plant, in addition to new crushers, have sufficient capacity to reach the production goals; the current obstacle in the process of production increase is a bottleneck in crushers caused by oversize boulders. Boulders require extra efforts for secondary blasting or hammer breakage and if entered the crushers, they cause downtimes. Therefore a more evenly distributed fragmentation with less oversize material can be advantageous. Furthermore, a better fragmentation can cause a reduction in energy costs by demanding less amounts of crushing energy. In order to achieve a more favorable fragmentation, two alternative blast designs in addition to a reference design were tested and the results were evaluated and compared to the …",
"title": ""
},
{
"docid": "607cd26b9c51b5b52d15087d0e6662cb",
"text": "Pseudo-NMOS level-shifters consume large static current making them unsuitable for portable devices implemented with HV CMOS. Dynamic level-shifters help reduce power consumption. To reduce on-current to a minimum (sub-nanoamp), modifications are proposed to existing pseudo-NMOS and dynamic level-shifter circuits. A low power three transistor static level-shifter design with a resistive load is also presented.",
"title": ""
},
{
"docid": "f46ae26ef53a692985c2e7dc39cef13b",
"text": "Assisting hip extension with a tethered exosuit and a simulation-optimized force profile reduces metabolic cost of running.",
"title": ""
}
] |
scidocsrr
|
71ab0493c8a0dc97c8ae31eac2d7c7f5
|
High-level synthesis of dynamic data structures: A case study using Vivado HLS
|
[
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
},
{
"docid": "cd1cfbdae08907e27a4e1c51e0508839",
"text": "High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the increasing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools are still lacking some maturity, they are constantly improving and the industry is now starting to adopt them into their design flows.",
"title": ""
}
] |
[
{
"docid": "8ba192226a3c3a4f52ca36587396e85c",
"text": "For many years I have been engaged in psychotherapy with individuals in distress. In recent years I have found myself increasingly concerned with the process of abstracting from that experience the general principles which appear to be involved in it. I have endeavored to discover any orderliness, any unity which seems to inhere in the subtle, complex tissue of interpersonal relationship in which I have so constantly been immersed in therapeutic work. One of the current products of this concern is an attempt to state, in formal terms, a theory of psychotherapy, of personality, and of interpersonal relationships which will encompass and contain the phenomena of my experience. What I wish to do in this paper is to take one very small segment of that theory, spell it out more completely, and explore its meaning and usefulness.",
"title": ""
},
{
"docid": "44934f07118f7ec619c7e165cdf9d797",
"text": "The American Heart Association (AHA) has had a longstanding commitment to provide information about the role of nutrition in cardiovascular disease (CVD) risk reduction. Many activities have been and are currently directed toward this objective, including issuing AHA Dietary Guidelines periodically (most recently in 20001) and Science Advisories and Statements on an ongoing basis to review emerging nutrition-related issues. The objective of the AHA Dietary Guidelines is to promote healthful dietary patterns. A consistent focus since the inception of the AHA Dietary Guidelines has been to reduce saturated fat (and trans fat) and cholesterol intake, as well as to increase dietary fiber consumption. Collectively, all the AHA Dietary Guidelines have supported a dietary pattern that promotes the consumption of diets rich in fruits, vegetables, whole grains, low-fat or nonfat dairy products, fish, legumes, poultry, and lean meats. This dietary pattern has a low energy density to promote weight control and a high nutrient density to meet all nutrient needs. As reviewed in the first AHA Science Advisory2 on antioxidant vitamins, epidemiological and population studies reported that some micronutrients may beneficially affect CVD risk (ie, antioxidant vitamins such as vitamin E, vitamin C, and -carotene). Recent epidemiological evidence3 is consistent with the earlier epidemiological and population studies (reviewed in the first Science Advisory).2 These findings have been supported by in vitro studies that have established a role of oxidative processes in the development of the atherosclerotic plaque. Underlying the atherosclerotic process are proatherogenic and prothrombotic oxidative events in the artery wall that may be inhibited by antioxidants. The 1999 AHA Science Advisory2 recommended that the general population consume a balanced diet with emphasis on antioxidant-rich fruits, vegetables, and whole grains, advice that was consistent with the AHA Dietary Guidelines at the time. In the absence of data from randomized, controlled clinical trials, no recommendations were made with regard to the use of antioxidant supplements. In the past 5 years, a number of controlled clinical studies have reported the effects of antioxidant vitamin and mineral supplements on CVD risk (see Tables 1 through 3).4–21 These studies have been the subject of several recent reviews22–26 and formed the database for the present article. In general, the studies presented in the tables differ with regard to subject populations studied, type and dose of antioxidant/cocktail administered, length of study, and study end points. Overall, the studies have been conducted on post–myocardial infarction subjects or subjects at high risk for CVD, although some studied healthy subjects. In addition to dosage differences in vitamin E studies, some trials used the synthetic form, whereas others used the natural form of the vitamin. With regard to the other antioxidants, different doses were administered (eg, for -carotene and vitamin C). The antioxidant cocktail formulations used also varied. Moreover, subjects were followed up for at least 1 year and for as long as 12 years. 
In addition, a meta-analysis of 15 studies (7 studies of vitamin E, 50 to 800 IU; 8 studies of -carotene, 15 to 50 mg) with 1000 or more subjects per trial has been conducted to ascertain the effects of antioxidant vitamins on cardiovascular morbidity and mortality.27 Collectively, for the most part, clinical trials have failed to demonstrate a beneficial effect of antioxidant supplements on CVD morbidity and mortality. With regard to the meta-analysis, the lack of efficacy was demonstrated consistently for different doses of various antioxidants in diverse population groups. Although the preponderance of clinical trial evidence has not shown beneficial effects of antioxidant supplements, evidence from some smaller studies documents a benefit of -tocopherol (Cambridge Heart AntiOxidant Study,13 Secondary Prevention with Antioxidants of Cardiovascular disease in End-stage renal disease study),15 -tocopherol and slow-release vitamin C (Antioxidant Supplementation in Atherosclerosis Prevention study),16 and vitamin C plus vitamin E (Intravascular Ultrasonography Study)17 on cardio-",
"title": ""
},
{
"docid": "54327e52ad52e1b7a6ead7c1afe4a6d5",
"text": "Implementation of smart grid provides an opportunity for concurrent implementation of nonintrusive appliance load monitoring (NIALM), which disaggregates the total household electricity data into data on individual appliances. This paper introduces a new disaggregation algorithm for NIALM based on a modified Viterbi algorithm. This modification takes advantage of the sparsity of transitions between appliances' states to decompose the main algorithm, thus making the algorithm complexity linearly proportional to the number of appliances. By consideration of a series of data and integrating a priori information, such as the frequency of use and time on/time off statistics, the algorithm dramatically improves NIALM accuracy as compared to the accuracy of established NIALM algorithms.",
"title": ""
},
{
"docid": "30f48021bca12899d6f2e012e93ba12d",
"text": "There are several locomotion mechanisms in Nature. The study of mechanics of any locomotion is very useful for scientists and researchers. Many locomotion principles from Nature have been adapted in robotics. There are several species which are capable of multimode locomotion such as walking and swimming, and flying etc. Frogs are such species, capable of jumping, walking, and swimming. Multimode locomotion is important for robots to work in unknown environment. Frogs are widely known as good multimode locomotors. Webbed feet help them to swim efficiently in water. This paper presents the study of frog's swimming locomotion and adapting the webbed feet for swimming locomotion of the robots. A simple mechanical model of robotic leg with webbed foot, which can be used for multi-mode locomotion and robotic frog, is put forward. All the joints of the legs are designed to be driven by tendon-pulley arrangement with the actuators mounted on the body, which allows the legs to be lighter and compact.",
"title": ""
},
{
"docid": "b715631367001fb60b4aca9607257923",
"text": "This paper describes a new predictive algorithm that can be used for programming large arrays of analog computational memory elements within 0.2% of accuracy for 3.5 decades of currents. The average number of pulses required are 7-8 (20 mus each). This algorithm uses hot-electron injection for accurate programming and Fowler-Nordheim tunneling for global erase. This algorithm has been tested for programming 1024times16 and 96times16 floating-gate arrays in 0.25 mum and 0.5 mum n-well CMOS processes, respectively",
"title": ""
},
{
"docid": "ebe14e601d0b61f10f6674e2d7108d41",
"text": "In this letter, the design procedure and electrical performance of a dual band (2.4/5.8GHz) printed dipole antenna using spiral structure are proposed and investigated. For the first time, a dual band printed dipole antenna with spiral configuration is proposed. In addition, a matching method by adjusting the transmission line width, and a new bandwidth broadening method varying the distance between the top and bottom spirals are reported. The operating frequencies of the proposed antenna are 2.4GHz and 5.8GHz which cover WLAN system. The proposed antenna achieves a good matching using tapered transmission lines for the top and bottom spirals. The desired resonant frequencies are obtained by adjusting the number of turns of the spirals. The bandwidth is optimized by varying the distance between the top and bottom spirals. A relative position of the bottom spiral plays an important role in achieving a bandwidth in terms of 10-dB return loss.",
"title": ""
},
{
"docid": "7957742cd5da5a720446ae9af185df65",
"text": "Data Mining ist ein Prozess, bei dem mittels statistischer Verfahren komplexe Muster in meist großen Mengen von Daten gesucht werden. Damit dieser von Organisationen verstärkt zur Entscheidungsunterstützung eingesetzt werden kann, wäre es hilfreich, wenn Domänenexperten durch Self-Service-Anwendungen in die Lage versetzt würden, diese Form der Analysen eigenständig durchzuführen, damit sie nicht mehr auf Datenwissenschaftler und IT-Fachkräfte angewiesen sind. In diesem Artikel soll eine Versuchsreihe vorgestellt werden, die eine Bewertung darüber ermöglicht, wie geeignet etablierte Data-MiningSoftwareplattformen (IBM SPSS Modeler, KNIME, RapidMiner und WEKA) sind, um sie Gelegenheitsanwendern zur Verfügung zu stellen. In den vorgestellten Versuchen sollen Entscheidungsbäume im Fokus stehen, eine besonders einfache Form von Algorithmen, die der Literatur und unserer Erfahrung nach am ehesten für die Nutzung in Self-Service-Data-Mining-Anwendungen geeignet sind. Dabei werden mithilfe eines einheitlichen Datensets auf den verschiedenen Plattformen Entscheidungsbäume für identische Zielvariablen konstruiert. Die Ergebnisse sind im Hinblick auf die Klassifikationsgenauigkeit zwar relativ ähnlich, die Komplexität der Modelle variiert jedoch. Aktuelle grafische Benutzeroberflächen lassen sich zwar auch ohne tiefgehende Kompetenzen in den Bereichen Informatik und Statistik bedienen, sie ersetzen aber nicht den Bedarf an datenwissenschaftlichen Kompetenzen, die besonders beim Schritt der Datenvorbereitung zum Einsatz kommen, welcher den größten Teil des Data-Mining-Prozesses ausmacht.",
"title": ""
},
{
"docid": "f782af034ef46a15d89637a43ad2849c",
"text": "Introduction: Evidence-based treatment of abdominal hernias involves the use of prosthetic mesh. However, the most commonly used method of treatment of diastasis of the recti involves plication with non-absorbable sutures as part of an abdominoplasty procedure. This case report describes single-port laparoscopic repair of diastasis of recti and umbilical hernia with prosthetic mesh after plication with slowly absorbable sutures combined with abdominoplasty. Technique Description: Our patient is a 36-year-old woman with severe diastasis of the recti, umbilical hernia and an excessive amount of redundant skin after two previous pregnancies and caesarean sections. After raising the upper abdominal flap, a single-port was placed in the left upper quadrant and the ligamenturn teres was divided. The diastasis of the recti and umbilical hernia were plicated under direct vision with continuous and interrupted slowly absorbable sutures before an antiadhesive mesh was placed behind the repair with 6 cm overlap, transfixed in 4 quadrants and tacked in place with non-absorbable tacks in a double-crown technique. The left upper quadrant wound was closed with slowly absorbable sutures. The excess skin was removed and fibrin sealant was sprayed in the subcutaneous space to minimize the risk of serorna formation without using drains. Discussion: Combining single-port laparoscopic repair of diastasis of recti and umbilical hemia repair minimizes inadvertent suturing of abdominal contents during plication, the risks of port site hernias associated with conventional multipart repair and permanently reinforced the midline weakness while achieving “scarless” surgery.",
"title": ""
},
{
"docid": "a5cb288b5a2f29c22a9338be416a27f7",
"text": "L ^ N C O U R A G I N G CHILDREN'S INTRINSIC MOTIVATION CAN HELP THEM TO ACHIEVE ACADEMIC SUCCESS (ADELMAN, 1978; ADELMAN & TAYLOR, 1986; GOTTFRIED, 1 9 8 3 , 1 9 8 5 ) . TO HELP STUDENTS WITH AND WITHOUT LEARNING DISABILITIES TO DEVELOP ACADEMIC INTRINSIC MOTIVATION, IT IS IMPORTANT TO DEFINE THE FACTORS THAT AFFECT MOTIVATION (ADELMAN & CHANEY, 1 9 8 2 ; ADELMAN & TAYLOR, 1983). T H I S ARTICLE OFFERS EDUCATORS AN INSIGHT INTO THE EFFECTS OF DIFFERENT MOTIVATIONAL ORIENTATIONS ON THE SCHOOL LEARNING OF STUDENTS WITH LEARNING DISABILITIES, AS W E L L AS INTO THE VARIABLES AFFECTING INTRINSIC AND EXTRINSIC MOTIVATION. ALSO INCLUDED ARE RECOMMENDATIONS, BASED ON EMPIRICAL EVIDENCE, FOR ENHANCING ACADEMIC INTRINSIC MOTIVATION IN LEARNERS OF VARYING ABIL IT IES AT A L L GRADE LEVELS. I .NTEREST IN THE VARIOUS ASPECTS OF INTRINSIC and extrinsic motivation has accelerated in recent years. Motivational orientation is considered to be an important factor in determining the academic success of children with and without disabilities (Adelman & Taylor, 1986; Calder & Staw, 1975; Deci, 1975; Deci & Chandler, 1986; Schunk, 1991). Academic intrinsic motivation has been found to be significantly correlated with academic achievement in students with learning disabilities (Gottfried, 1985) and without learning disabilities (Adelman, 1978; Adelman & Taylor, 1983). However, children with learning disabilities (LD) are less likely than their nondisabled peers to be intrinsically motivated (Adelman & Chaney, 1982; Adelman & Taylor, 1986; Mastropieri & Scruggs, 1994; Smith, 1994). Students with LD have been found to have more positive attitudes toward school than toward school learning (Wilson & David, 1994). Wilson and David asked 89 students with LD to respond to items on the School Attitude Measures (SAM; Wick, 1990) and on the Children's Academic Intrinsic Motivation Inventory (CAIMI; Gottfried, 1986). The students with L D were found to have a more positive attitude toward the school environment than toward academic tasks. Research has also shown that students with LD may derive their self-perceptions from areas other than school, and do not see themselves as less competent in areas of school learning (Grolnick & Ryan, 1990). Although there is only a limited amount of research available on intrinsic motivation in the population with special needs (Adelman, 1978; Adelman & Taylor, 1986; Grolnick & Ryan, 1990), there is an abundance of research on the general school-age population. This article is an at tempt to use existing research to identify variables pertinent to the academic intrinsic motivation of children with learning disabilities. The first part of the article deals with the definitions of intrinsic and extrinsic motivation. The next part identifies some of the factors affecting the motivational orientation and subsequent academic achievement of school-age children. This is followed by empirical evidence of the effects of rewards on intrinsic motivation, and suggestions on enhancing intrinsic motivation in the learner. At the end, several strategies are presented that could be used by the teacher to develop and encourage intrinsic motivation in children with and without LD. l O R E M E D I A L A N D S P E C I A L E D U C A T I O N Volume 18. 
Number 1, January/February 1997, Pages 12-19 D E F I N I N G M O T I V A T I O N A L A T T R I B U T E S Intrinsic Motivation Intrinsic motivation has been defined as (a) participation in an activity purely out of curiosity, that is, from a need to know more about something (Deci, 1975; Gottfried, 1983; Woolfolk, 1990); (b) the desire to engage in an activity purely for the sake of participating in and completing a task (Bates, 1979; Deci, Vallerand, Pelletier, & Ryan, 1991); and (c) the desire to contribute (Mills, 1991). Academic intrinsic motivation has been measured by (a) the ability of the learner to persist with the task assigned (Brophy, 1983; Gottfried, 1983); (b) the amount of time spent by the student on tackling the task (Brophy, 1983; Gottfried, 1983); (c) the innate curiosity to learn (Gottfried, 1983); (d) the feeling of efficacy related to an activity (Gottfried, 1983; Schunk, 1991; Smith, 1994); (e) the desire to select an activity (Brophy, 1983); and (f) a combination of all these variables (Deci, 1975; Deci & Ryan, 1985). A student who is intrinsically motivated will persist with the assigned task, even though it may be difficult (Gottfried, 1983; Schunk, 1990), and will not need any type of reward or incentive to initiate or complete a task (Beck, 1978; Deci, 1975; Woolfolk, 1990). This type of student is more likely to complete the chosen task and be excited by the challenging nature of an activity. The intrinsically motivated student is also more likely to retain the concepts learned and to feel confident about tackling unfamiliar learning situations, like new vocabulary words. However, the amount of interest generated by the task also plays a role in the motivational orientation of the learner. An assigned task with zero interest value is less likely to motivate the student than is a task that arouses interest and curiosity. Intrinsic motivation is based in the innate, organismic needs for competence and self-determination (Deci & Ryan, 1985; Woolfolk, 1990), as well as the desire to seek and conquer challenges (Adelman & Taylor, 1990). People are likely to be motivated to complete a task on the basis of their level of interest and the nature of the challenge. Research has suggested that children with higher academic intrinsic motivation function more effectively in school (Adelman & Taylor, 1990; Boggiano & Barrett, 1992; Gottfried, 1990; Soto, 1988). Besides innate factors, there are several other variables that can affect intrinsic motivation. Extrinsic Motivation Adults often give the learner an incentive to participate in or to complete an activity. The incentive might be in the form of a tangible reward, such as money or candy. Or, it might be the likelihood of a reward in the future, such as a good grade. Or, it might be a nontangible reward, for example, verbal praise or a pat on the back. The incentive might also be exemption from a less liked activity or avoidance of punishment. These incentives are extrinsic motivators. A person is said to be extrinsically motivated when she or he undertakes a task purely for the sake of attaining a reward or for avoiding some punishment (Adelman & Taylor, 1990; Ball, 1984; Beck, 1978; Deci, 1975; Wiersma, 1992; Woolfolk, 1990). Extrinsic motivation can, especially in learning and other forms of creative work, interfere with intrinsic motivation (Benninga et al., 1991; Butler, 1989; Deci, 1975; McCullers, Fabes, & Moran, 1987). 
In such cases, it might be better not to offer rewards for participating in or for completing an activity, be it textbook learning or an organized play activity. Not only teachers but also parents have been found to negatively influence the motivational orientation of the child by providing extrinsic consequences contingent upon their school performance (Gottfried, Fleming, & Gottfried, 1994). The relationship between rewards (and other extrinsic factors) and the intrinsic motivation of the learner is outlined in the following sections. MOTIVATION AND THE LEARNER In a classroom, the student is expected to tackle certain types of tasks, usually with very limited choices. Most of the research done on motivation has been done in settings where the learner had a wide choice of activities, or in a free-play setting. In reality, the student has to complete tasks that are compulsory as well as evaluated (Brophy, 1983). Children are expected to complete a certain number of assignments that meet specified criteria. For example, a child may be asked to complete five multiplication problems and is expected to get correct answers to at least three. Teachers need to consider how instructional practices are designed from the motivational perspective (Schunk, 1990). Development of skills required for academic achievement can be influenced by instructional design. If the design undermines student ability and skill level, it can reduce motivation (Brophy, 1983; Schunk, 1990). This is especially applicable to students with disabilities. Students with LD have shown a significant increase in academic learning after engaging in interesting tasks like computer games designed to enhance learning (Adelman, Lauber, Nelson, & Smith, 1989). A common aim of educators is to help all students enhance their learning, regardless of the student's ability level. To achieve this outcome, the teacher has to develop a curriculum geared to the individual needs and ability levels of the students, especially the students with special needs. If the assigned task is within the child's ability level as well as inherently interesting, the child is very likely to be intrinsically motivated to tackle the task. The task should also be challenging enough to stimulate the child's desire to attain mastery. The probability of success or failure is often attributed to factors such as ability, effort, difficulty level of the task, R E M E D I A L A N D S P E C I A L E D U C A T I O N 1 O Volume 18, Number 1, January/February 1997 and luck (Schunk, 1990). One or more of these attributes might, in turn, affect the motivational orientation of a student. The student who is sure of some level of success is more likely to be motivated to tackle the task than one who is unsure of the outcome (Adelman & Taylor, 1990). A student who is motivated to learn will find school-related tasks meaningful (Brophy, 1983, 1987). Teachers can help students to maximize their achievement by adjusting the instructional design to their individual characteristics and motivational orientation. The personality traits and motivational tendency of learners with mild handicaps can either help them to compensate for their inadequate learning abilities and enhance performanc",
"title": ""
},
{
"docid": "f683ae3ae16041977f0d6644213de112",
"text": "Keywords: Wind turbine Fault prognosis Fault detection Pitch system ANFIS Neuro-fuzzy A-priori knowledge a b s t r a c t The fast growing wind industry has shown a need for more sophisticated fault prognosis analysis in the critical and high value components of a wind turbine (WT). Current WT studies focus on improving their reliability and reducing the cost of energy, particularly when WTs are operated offshore. WT Supervisory Control and Data Acquisition (SCADA) systems contain alarms and signals that could provide an early indication of component fault and allow the operator to plan system repair prior to complete failure. Several research programmes have been made for that purpose; however, the resulting cost savings are limited because of the data complexity and relatively low number of failures that can be easily detected in early stages. A new fault prognosis procedure is proposed in this paper using a-priori knowledge-based Adaptive Neuro-Fuzzy Inference System (ANFIS). This has the aim to achieve automated detection of significant pitch faults, which are known to be significant failure modes. With the advantage of a-priori knowledge incorporation, the proposed system has improved ability to interpret the previously unseen conditions and thus fault diagnoses are improved. In order to construct the proposed system, the data of the 6 known WT pitch faults were used to train the system with a-priori knowledge incorporated. The effectiveness of the approach was demonstrated using three metrics: (1) the trained system was tested in a new wind farm containing 26 WTs to show its prognosis ability; (2) the first test result was compared to a general alarm approach; (3) a Confusion Matrix analysis was made to demonstrate the accuracy of the proposed approach. The result of this research has demonstrated that the proposed a-priori knowledge-based ANFIS (APK-ANFIS) approach has strong potential for WT pitch fault prognosis. Wind is currently the fastest growing renewable energy source for electrical generation around the world. It is expected that a large number of wind turbines (WTs), especially offshore, will be employed in the near future (EWEA, 2011; Krohn, Morthorst, & Awerbuch, 2009). Following a rapid acceleration of wind energy development in the early 21st century, WT manufacturers are beginning to focus on improving their cost of energy. WT operational performance is critical to the cost of energy. This is because Operation and Maintenance (O&M) costs constitute a significant share of the annual cost of a wind …",
"title": ""
},
{
"docid": "4406b7c9d53b895355fa82b11da21293",
"text": "In today's scenario, World Wide Web (WWW) is flooded with huge amount of information. Due to growing popularity of the internet, finding the meaningful information among billions of information resources on the WWW is a challenging task. The information retrieval (IR) provides documents to the end users which satisfy their need of information. Search engine is used to extract valuable information from the internet. Web crawler is the principal part of search engine; it is an automatic script or program which can browse the WWW in automatic manner. This process is known as web crawling. In this paper, review on strategies of information retrieval in web crawling has been presented that are classifying into four categories viz: focused, distributed, incremental and hidden web crawlers. Finally, on the basis of user customized parameters the comparative analysis of various IR strategies has been performed.",
"title": ""
},
{
"docid": "55370f9487be43f2fbd320c903005185",
"text": "Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size and which are perceptually equivalent to the sample. The two main approaches are statisticsbased methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; then, a random sampling conditioned to this signature produces genuinely different texture images. The second class boils down to a clever “copy-paste” procedure, which stitches together large regions of the sample. Hybrid methods try to combines ideas from both approaches to avoid their hurdles. Current methods, including the recent CNN approaches, are able to produce impressive synthesis on various kinds of textures. Nevertheless, most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures the results of state-of-the-art methods degrade rapidly.",
"title": ""
},
{
"docid": "89703b730ff63548530bdb9e2ce59c6b",
"text": "How to develop creative digital products which really meet the prosumer's needs while promoting a positive user experience? That question has guided this work looking for answers through different disciplinary fields. Born on 2002 as an Engineering PhD dissertation, since 2003 the method has been improved by teaching it to Communication and Design graduate and undergraduate courses. It also guided some successful interdisciplinary projects. Its main focus is on developing a creative conceptual model that might meet a human need within its context. The resulting method seeks: (1) solutions for the main problems detected in the previous versions; (2) significant ways to represent Design practices; (3) a set of activities that could be developed by people without programming knowledge. The method and its research current state are presented in this work.",
"title": ""
},
{
"docid": "c804aa80440827033fa787723d23c698",
"text": "The present paper analyzes the self-generated explanations (from talk-aloud protocols) that “Good” ond “Poor” students produce while studying worked-out exomples of mechanics problems, and their subsequent reliance on examples during problem solving. We find that “Good” students learn with understanding: They generate many explanations which refine and expand the conditions for the action ports of the exomple solutions, ond relate these actions to principles in the text. These self-explanations are guided by accurate monitoring of their own understanding and misunderstanding. Such learning results in example-independent knowledge and in a better understanding of the principles presented in the text. “Poor” students do not generate sufficient self-explonations, monitor their learning inaccurately, and subsequently rely heovily an examples. We then discuss the role of self-explanations in facilitating problem solving, as well OS the adequacy of current Al models of explanation-based learning to account for these psychological findings.",
"title": ""
},
{
"docid": "cf020ec1d5fbaa42d4699b16d27434d0",
"text": "Direct methods for restoration of images blurred by motion are analyzed and compared. The term direct means that the considered methods are performed in a one-step fashion without any iterative technique. The blurring point-spread function is assumed to be unknown, and therefore the image restoration process is called blind deconvolution. What is believed to be a new direct method, here called the whitening method, was recently developed. This method and other existing direct methods such as the homomorphic and the cepstral techniques are studied and compared for a variety of motion types. Various criteria such as quality of restoration, sensitivity to noise, and computation requirements are considered. It appears that the recently developed method shows some improvements over other older methods. The research presented here clarifies the differences among the direct methods and offers an experimental basis for choosing which blind deconvolution method to use. In addition, some improvements on the methods are suggested.",
"title": ""
},
{
"docid": "b4714cacd13600659e8a94c2b8271697",
"text": "AIM AND OBJECTIVE\nExamine the pharmaceutical qualities of cannabis including a historical overview of cannabis use. Discuss the use of cannabis as a clinical intervention for people experiencing palliative care, including those with life-threatening chronic illness such as multiple sclerosis and motor neurone disease [amyotrophic lateral sclerosis] in the UK.\n\n\nBACKGROUND\nThe non-medicinal use of cannabis has been well documented in the media. There is a growing scientific literature on the benefits of cannabis in symptom management in cancer care. Service users, nurses and carers need to be aware of the implications for care and treatment if cannabis is being used medicinally.\n\n\nDESIGN\nA comprehensive literature review.\n\n\nMETHOD\nLiterature searches were made of databases from 1996 using the term cannabis and the combination terms of cannabis and palliative care; symptom management; cancer; oncology; chronic illness; motor neurone disease/amyotrophic lateral sclerosis; and multiple sclerosis. Internet material provided for service users searching for information about the medicinal use of cannabis was also examined.\n\n\nRESULTS\nThe literature on the use of cannabis in health care repeatedly refers to changes for users that may be equated with improvement in quality of life as an outcome of its use. This has led to increased use of cannabis by these service users. However, the cannabis used is usually obtained illegally and can have consequences for those who choose to use it for its therapeutic value and for nurses who are providing care.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nQuestions and dilemmas are raised concerning the role of the nurse when caring and supporting a person making therapeutic use of cannabis.",
"title": ""
},
{
"docid": "0bc7de3f7ac06aa080ec590bdaf4c3b3",
"text": "This paper demonstrates that US prestige-press coverage of global warming from 1988 to 2002 has contributed to a significant divergence of popular discourse from scientific discourse. This failed discursive translation results from an accumulation of tactical media responses and practices guided by widely accepted journalistic norms. Through content analysis of US prestige press— meaning the New York Times, the Washington Post, the Los Angeles Times, and the Wall Street Journal—this paper focuses on the norm of balanced reporting, and shows that the prestige press’s adherence to balance actually leads to biased coverage of both anthropogenic contributions to global warming and resultant action. r 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "03277ef81159827a097c73cd24f8b5c0",
"text": "It is generally accepted that there is something special about reasoning by using mental images. The question of how it is special, however, has never been satisfactorily spelled out, despite more than thirty years of research in the post-behaviorist tradition. This article considers some of the general motivation for the assumption that entertaining mental images involves inspecting a picture-like object. It sets out a distinction between phenomena attributable to the nature of mind to what is called the cognitive architecture, and ones that are attributable to tacit knowledge used to simulate what would happen in a visual situation. With this distinction in mind, the paper then considers in detail the widely held assumption that in some important sense images are spatially displayed or are depictive, and that examining images uses the same mechanisms that are deployed in visual perception. I argue that the assumption of the spatial or depictive nature of images is only explanatory if taken literally, as a claim about how images are physically instantiated in the brain, and that the literal view fails for a number of empirical reasons--for example, because of the cognitive penetrability of the phenomena cited in its favor. Similarly, while it is arguably the case that imagery and vision involve some of the same mechanisms, this tells us very little about the nature of mental imagery and does not support claims about the pictorial nature of mental images. Finally, I consider whether recent neuroscience evidence clarifies the debate over the nature of mental images. I claim that when such questions as whether images are depictive or spatial are formulated more clearly, the evidence does not provide support for the picture-theory over a symbol-structure theory of mental imagery. Even if all the empirical claims were true, they do not warrant the conclusion that many people have drawn from them: that mental images are depictive or are displayed in some (possibly cortical) space. Such a conclusion is incompatible with what is known about how images function in thought. We are then left with the provisional counterintuitive conclusion that the available evidence does not support rejection of what I call the \"null hypothesis\"; namely, that reasoning with mental images involves the same form of representation and the same processes as that of reasoning in general, except that the content or subject matter of thoughts experienced as images includes information about how things would look.",
"title": ""
},
{
"docid": "7d301fc945abe95cef82cb56e98e6cfe",
"text": "Many modern applications are a mixture of streaming, transactional and analytical workloads. However, traditional data platforms are each designed for supporting a specific type of workload. The lack of a single platform to support all these workloads has forced users to combine disparate products in custom ways. The common practice of stitching heterogeneous environments has caused enormous production woes by increasing complexity and the total cost of ownership. To support this class of applications, we present SnappyData as the first unified engine capable of delivering analytics, transactions, and stream processing in a single integrated cluster. We build this hybrid engine by carefully marrying a big data computational engine (Apache Spark) with a scale-out transactional store (Apache GemFire). We study and address the challenges involved in building such a hybrid distributed system with two conflicting components designed on drastically different philosophies: one being a lineage-based computational model designed for high-throughput analytics, the other a consensusand replication-based model designed for low-latency operations.",
"title": ""
}
] |
scidocsrr
|
cd4fef6db7a2a054c813b3bf27d67f64
|
Scalable high-performance architecture for convolutional ternary neural networks on FPGA
|
[
{
"docid": "b7d13c090e6d61272f45b1e3090f0341",
"text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"title": ""
},
{
"docid": "87daab52e390eaeff7da0ad7dafe728a",
"text": "The computation and storage requirements for Deep Neural Networks (DNNs) are usually high. This issue limits their deployability on ubiquitous computing devices such as smart phones, wearables and autonomous drones. In this paper, we propose ternary neural networks (TNNs) in order to make deep learning more resource-efficient. We train these TNNs using a teacher-student approach based on a novel, layer-wise greedy methodology. Thanks to our two-stage training procedure, the teacher network is still able to use state-of-the-art methods such as dropout and batch normalization to increase accuracy and reduce training time. Using only ternary weights and activations, the student ternary network learns to mimic the behavior of its teacher network without using any multiplication. Unlike its {-1,1} binary counterparts, a ternary neural network inherently prunes the smaller weights by setting them to zero during training. This makes them sparser and thus more energy-efficient. We design a purpose-built hardware architecture for TNNs and implement it on FPGA and ASIC. We evaluate TNNs on several benchmark datasets and demonstrate up to 3.1 χ better energy efficiency with respect to the state of the art while also improving accuracy.",
"title": ""
}
] |
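The two positive passages above (BinaryConnect and the ternary neural networks paper) both rest on the same training idea: quantized weights are used in the forward and backward passes, while gradient updates accumulate in a full-precision copy, and the ternary case additionally prunes small weights to zero. The following minimal NumPy sketch illustrates only that software-level quantization idea, not the papers' released code and certainly not the FPGA architecture the query refers to; the layer sizes, learning rate, and the ternarization threshold are illustrative assumptions.

```python
import numpy as np

def binarize(w):
    """BinaryConnect-style quantization: constrain weights to {-1, +1}."""
    return np.where(w >= 0, 1.0, -1.0)

def ternarize(w, threshold=0.05):
    """Ternary quantization: small weights are pruned to 0, the rest to +/-1.
    The threshold is an illustrative choice, not a value from the papers."""
    q = np.zeros_like(w)
    q[w > threshold] = 1.0
    q[w < -threshold] = -1.0
    return q

# Toy single-layer training step with a full-precision accumulator.
rng = np.random.default_rng(0)
w_real = rng.normal(scale=0.1, size=(784, 10))   # full-precision weights, kept only for updates
x = rng.normal(size=(32, 784))                   # a dummy mini-batch
y = rng.integers(0, 10, size=32)                 # dummy labels

w_q = ternarize(w_real)                          # quantized weights used for forward/backward
logits = x @ w_q
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
grad_logits = probs.copy()
grad_logits[np.arange(32), y] -= 1.0             # softmax cross-entropy gradient
grad_w = x.T @ grad_logits / 32                  # gradient w.r.t. the quantized weights

w_real -= 0.01 * grad_w                          # straight-through update of the real-valued copy
w_real = np.clip(w_real, -1.0, 1.0)              # keep the accumulator bounded, as in BinaryConnect
```

Swapping `ternarize` for `binarize` gives the binary variant; in both cases the multiplications in the matrix product reduce, in spirit, to additions and subtractions, which is what the dedicated hardware in these papers exploits.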
[
{
"docid": "b280d6115add9407a08de94d34fe47d2",
"text": "Terabytes of data are generated day-to-day from modern information systems, cloud computing and digital technologies, as the increasing number of Internet connected devices grows. However, the analysis of these massive data requires many efforts at multiple levels for knowledge extraction and decision making. Therefore, Big Data Analytics is a current area of research and development that has become increasingly important. This article investigates cutting-edge research efforts aimed at analyzing Internet of Things (IoT) data. The basic objective of this article is to explore the potential impact of large data challenges, research efforts directed towards the analysis of IoT data and various tools associated with its analysis. As a result, this article suggests the use of platforms to explore big data in numerous stages and better understand the knowledge we can draw from the data, which opens a new horizon for researchers to develop solutions based on open research challenges and topics.",
"title": ""
},
{
"docid": "eb31a7242c682b3683ce9659ce32b7c9",
"text": "Code smells are symptoms of poor design and implementation choices that may hinder code comprehension, and possibly increase changeand fault-proneness. While most of the detection techniques just rely on structural information, many code smells are intrinsically characterized by how code elements change overtime. In this paper, we propose Historical Information for Smell deTection (HIST), an approach exploiting change history information to detect instances of five different code smells, namely Divergent Change, Shotgun Surgery, Parallel Inheritance, Blob, and Feature Envy. We evaluate HIST in two empirical studies. The first, conducted on 20 open source projects, aimed at assessing the accuracy of HIST in detecting instances of the code smells mentioned above. The results indicate that the precision of HIST ranges between 72 and 86 percent, and its recall ranges between 58 and 100 percent. Also, results of the first study indicate that HIST is able to identify code smells that cannot be identified by competitive approaches solely based on code analysis of a single system's snapshot. Then, we conducted a second study aimed at investigating to what extent the code smells detected by HIST (and by competitive code analysis techniques) reflect developers' perception of poor design and implementation choices. We involved 12 developers of four open source projects that recognized more than 75 percent of the code smell instances identified by HIST as actual design/implementation problems.",
"title": ""
},
{
"docid": "c42edb326ec95c257b821cc617e174e6",
"text": "recommendation systems support users and developers of various computer and software systems to overcome information overload, perform information discovery tasks and approximate computation, among others. They have recently become popular and have attracted a wide variety of application scenarios from business process modelling to source code manipulation. Due to this wide variety of application domains, different approaches and metrics have been adopted for their evaluation. In this chapter, we review a range of evaluation metrics and measures as well as some approaches used for evaluating recommendation systems. The metrics presented in this chapter are grouped under sixteen different dimensions, e.g., correctness, novelty, coverage. We review these metrics according to the dimensions to which they correspond. A brief overview of approaches to comprehensive evaluation using collections of recommendation system dimensions and associated metrics is presented. We also provide suggestions for key future research and practice directions. Iman Avazpour Faculty of ICT, Centre for Computing and Engineering Software and Systems (SUCCESS), Swinburne University of Technology, Hawthorn, Victoria 3122, Australia e-mail: iavazpour@swin.",
"title": ""
},
{
"docid": "31328c32656d25d00d45a714df0f6d94",
"text": "In a heterogeneous cellular network (HetNet) consisting of $M$ tiers of densely-deployed base stations (BSs), consider that each of the BSs in the HetNet that are associated with multiple users is able to simultaneously schedule and serve two users in a downlink time slot by performing the (power-domain) non-orthogonal multiple access (NOMA) scheme. This paper aims at the preliminary study on the downlink coverage performance of the HetNet with the non-cooperative and the proposed cooperative NOMA schemes. First, we study the coverage probability of the NOMA users for the non-cooperative NOMA scheme in which no BSs are coordinated to jointly transmit the NOMA signals for a particular cell and the coverage probabilities of the two NOMA users of the BSs in each tier are derived. We show that the coverage probabilities can be largely reduced if allocated transmit powers for the NOMA users are not satisfied with some constraints. Next, we study and derive the coverage probabilities for the proposed cooperative NOMA scheme in which the void BSs that are not tagged by any users are coordinated to enhance the far NOMA user in a particular cell. Our analyses show that cooperative NOMA can significantly improve the coverage of all NOMA users as long as the transmit powers for the NOMA users are properly allocated.",
"title": ""
},
{
"docid": "255a707951238ace366ef1ea0df833fc",
"text": "During the last decade, researchers have verified that clothing can provide information for gender recognition. However, before extracting features, it is necessary to segment the clothing region. We introduce a new clothes segmentation method based on the application of the GrabCut technique over a trixel mesh, obtaining very promising results for a close to real time system. Finally, the clothing features are combined with facial and head context information to outperform previous results in gender recognition with a public database.",
"title": ""
},
{
"docid": "add30dc8d14a26eba48dbe5baaaf4169",
"text": "The authors investigated whether intensive musical experience leads to enhancements in executive processing, as has been shown for bilingualism. Young adults who were bilinguals, musical performers (instrumentalists or vocalists), or neither completed 3 cognitive measures and 2 executive function tasks based on conflict. Both executive function tasks included control conditions that assessed performance in the absence of conflict. All participants performed equivalently for the cognitive measures and the control conditions of the executive function tasks, but performance diverged in the conflict conditions. In a version of the Simon task involving spatial conflict between a target cue and its position, bilinguals and musicians outperformed monolinguals, replicating earlier research with bilinguals. In a version of the Stroop task involving auditory and linguistic conflict between a word and its pitch, the musicians performed better than the other participants. Instrumentalists and vocalists did not differ on any measure. Results demonstrate that extended musical experience enhances executive control on a nonverbal spatial task, as previously shown for bilingualism, but also enhances control in a more specialized auditory task, although the effect of bilingualism did not extend to that domain.",
"title": ""
},
{
"docid": "8b675cc47b825268837a7a2b5a298dc9",
"text": "Artificial Intelligence chatbot is a technology that makes interaction between man and machine possible by using natural language. In this paper, we proposed an architectural design of a chatbot that will function as virtual diabetes physician/doctor. This chatbot will allow diabetic patients to have a diabetes control/management advice without the need to go to the hospital. A general history of a chatbot, a brief description of each chatbots is discussed. We proposed the design of a new technique that will be implemented in this chatbot as the key component to function as diabetes physician. Using this design, chatbot will remember the conversation path through parameter called Vpath. Vpath will allow chatbot to gives a response that is mostly suitable for the whole conversation as it specifically designed to be a virtual diabetes physician.",
"title": ""
},
{
"docid": "9958d07645e35ec725dbcf4e11ffc0b1",
"text": "A bed exiting monitoring system with fall detection function for the elderly living alone is proposed in this paper. By separating the process of exiting or getting on the bed into several significant movements, the sensor system composed of infrared and pressure sensors attached to the bed will correspondingly respond to these movements. Using the finite state machine (FSM) method, the bed exiting state and fall events can be detected according to specific transitions recognized by the sensor system. Experiments with plausible assessment are conducted to find the optimal sensor combination solution and to verify the FSM algorithm, which is demonstrated feasible and effective in practical use.",
"title": ""
},
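The bed-monitoring passage above describes splitting getting on or off the bed into significant movements and tracking them with a finite state machine (FSM) driven by infrared and pressure sensors. The sketch below is a hypothetical, heavily simplified FSM of that flavour: the state names, sensor event names, and transitions are my own illustrative assumptions, not the transitions of the published system.

```python
# Hypothetical three-stage FSM for bed-exit detection driven by sensor events.
# Event names ("pressure_on", "pressure_off", "ir_edge", "ir_floor") are assumptions.
TRANSITIONS = {
    ("IN_BED", "pressure_off"): "SITTING_ON_EDGE",
    ("SITTING_ON_EDGE", "ir_edge"): "EXITING",
    ("SITTING_ON_EDGE", "pressure_on"): "IN_BED",
    ("EXITING", "ir_floor"): "OUT_OF_BED",
    ("EXITING", "pressure_on"): "IN_BED",
}

def run_fsm(events, state="IN_BED"):
    """Feed a sequence of sensor events through the FSM and report a bed exit."""
    for ev in events:
        new_state = TRANSITIONS.get((state, ev), state)  # unknown events keep the current state
        if new_state == "OUT_OF_BED" and state != "OUT_OF_BED":
            print("bed-exit detected")
        state = new_state
    return state

# Example trace: the person lifts off the mattress, crosses the edge IR beam,
# then triggers the floor-level IR sensor.
final_state = run_fsm(["pressure_off", "ir_edge", "ir_floor"])
print("final state:", final_state)
```

A fall-detection extension, as in the paper, would add further states and timing constraints between transitions, which are omitted here.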
{
"docid": "7793601ae7788b8a7b1082f3757cf1ab",
"text": "In this paper we present a reference data set that we are making publicly available to the indoor navigation community [8]. This reference data is intended for the analysis and verification of algorithms based on foot mounted inertial sensors. Furthermore, we describe our data collection methodology that is applicable to the analysis of a broad range of indoor navigation approaches. We employ a high precision optical reference system that is traditionally being used in the film industry for human motion capturing and in applications such as analysis of human motion in sports and medical rehabilitation. The data set provides measurements from a six degrees of freedom foot mounted inertial MEMS sensor array, as well as synchronous high resolution data from the optical tracking system providing ground truth for location and orientation. We show the use of this reference data set by comparing the performance of algorithms for an essential part of pedestrian dead reckoning systems for positioning, namely identification of the rest phase during the human gait cycle.",
"title": ""
},
{
"docid": "9824b33621ad02c901a9e16895d2b1a6",
"text": "Objective This systematic review aims to summarize current evidence on which naturally present cannabinoids contribute to cannabis psychoactivity, considering their reported concentrations and pharmacodynamics in humans. Design Following PRISMA guidelines, papers published before March 2016 in Medline, Scopus-Elsevier, Scopus, ISI-Web of Knowledge and COCHRANE, and fulfilling established a-priori selection criteria have been included. Results In 40 original papers, three naturally present cannabinoids (∆-9-Tetrahydrocannabinol, ∆-8-Tetrahydrocannabinol and Cannabinol) and one human metabolite (11-OH-THC) had clinical relevance. Of these, the metabolite produces the greatest psychoactive effects. Cannabidiol (CBD) is not psychoactive but plays a modulating role on cannabis psychoactive effects. The proportion of 9-THC in plant material is higher (up to 40%) than in other cannabinoids (up to 9%). Pharmacodynamic reports vary due to differences in methodological aspects (doses, administration route and volunteers' previous experience with cannabis). Conclusions Findings reveal that 9-THC contributes the most to cannabis psychoactivity. Due to lower psychoactive potency and smaller proportions in plant material, other psychoactive cannabinoids have a weak influence on cannabis final effects. Current lack of standard methodology hinders homogenized research on cannabis health effects. Working on a standard cannabis unit considering 9-THC is recommended.",
"title": ""
},
{
"docid": "aa8ae1fc471c46b5803bfa1303cb7001",
"text": "It is widely recognized that steganography with sideinformation in the form of a precover at the sender enjoys significantly higher empirical security than other embedding schemes. Despite the success of side-informed steganography, current designs are purely heuristic and little has been done to develop the embedding rule from first principles. Building upon the recently proposed MiPOD steganography, in this paper we impose multivariate Gaussian model on acquisition noise and estimate its parameters from the available precover. The embedding is then designed to minimize the KL divergence between cover and stego distributions. In contrast to existing heuristic algorithms that modulate the embedding costs by 1–2|e|, where e is the rounding error, in our model-based approach the sender should modulate the steganographic Fisher information, which is a loose equivalent of embedding costs, by (1–2|e|)^2. Experiments with uncompressed and JPEG images show promise of this theoretically well-founded approach. Introduction Steganography is a privacy tool in which messages are embedded in inconspicuous cover objects to hide the very presence of the communicated secret. Digital media, such as images, video, and audio are particularly suitable cover sources because of their ubiquity and the fact that they contain random components, the acquisition noise. On the other hand, digital media files are extremely complex objects that are notoriously hard to describe with sufficiently accurate and estimable statistical models. This is the main reason for why current steganography in such empirical sources [3] lacks perfect security and heavily relies on heuristics, such as embedding “costs” and intuitive modulation factors. Similarly, practical steganalysis resorts to increasingly more complex high-dimensional descriptors (rich models) and advanced machine learning paradigms, including ensemble classifiers and deep learning. Often, a digital media object is subjected to processing and/or format conversion prior to embedding the secret. The last step in the processing pipeline is typically quantization. In side-informed steganography with precover [21], the sender makes use of the unquantized cover values during embedding to hide data in a more secure manner. The first embedding scheme of this type described in the literature is the embedding-while-dithering [14] in which the secret message was embedded by perturbing the process of color quantization and dithering when converting a true-color image to a palette format. Perturbed quantization [15] started another direction in which rounding errors of DCT coefficients during JPEG compression were used to modify the embedding algorithm. This method has been advanced through a series of papers [23, 24, 29, 20], culminating with approaches based on advanced coding techniques with a high level of empirical security [19, 18, 6]. Side-information can have many other forms. Instead of one precover, the sender may have access to the acquisition oracle (a camera) and take multiple images of the same scene. These multiple exposures can be used to estimate the acquisition noise and also incorporated during embedding. This research direction has been developed to a lesser degree compared to steganography with precover most likely due to the difficulty of acquiring the required imagery and modeling the differences between acquisitions. In a series of papers [10, 12, 11], Franz et al. 
proposed a method in which multiple scans of the same printed image on a flat-bed scanner were used to estimate the model of the acquisition noise at every pixel. This requires acquiring a potentially large number of scans, which makes this approach rather labor intensive. Moreover, differences in the movement of the scanner head between individual scans lead to slight spatial misalignment that complicates using this type of side-information properly. Recently, the authors of [7] showed how multiple JPEG images of the same scene can be used to infer the preferred direction of embedding changes. By working with quantized DCT coefficients instead of pixels, the embedding is less sensitive to small differences between multiple acquisitions. Despite the success of side-informed schemes, there appears to be an alarming lack of theoretical analysis that would either justify the heuristics or suggest a well-founded (and hopefully more powerful) approach. In [13], the author has shown that the precover compensates for the lack of the cover model. In particular, for a Gaussian model of acquisition noise, precover-informed rounding is more secure than embedding designed to preserve the cover model estimated from the precover image assuming the cover is “sufficiently non-stationary.” Another direction worth mentioning in this context is the bottom-up model-based approach recently proposed by Bas [2]. The author showed that a high-capacity steganographic scheme with a rather low empirical detectability can be constructed when the process of digitally developing a RAW sensor capture is sufficiently simplified. The impact of embedding is masked as an increased level of photonic noise, e.g., due to a higher ISO setting. It will likely be rather difficult, however, to extend this approach to realistic processing pipelines. Inspired by the success of the multivariate Gaussian model in steganography for digital images [25, 17, 26], in this paper we adopt the same model for the precover and then derive the embedding rule to minimize the KL divergence between cover and stego distributions. The sideinformation is used to estimate the parameters of the acquisition noise and the noise-free scene. In the next section, we review current state of the art in heuristic side-informed steganography with precover. In the following section, we introduce a formal model of image acquisition. In Section “Side-informed steganography with MVG acquisition noise”, we describe the proposed model-based embedding method, which is related to heuristic approaches in Section “Connection to heuristic schemes.” The main bulk of results from experiments on images represented in the spatial and JPEG domain appear in Section “Experiments.” In the subsequent section, we investigate whether the public part of the selection channel, the content adaptivity, can be incorporated in selection-channel-aware variants of steganalysis features to improve detection of side-informed schemes. The paper is then closed with Conclusions. The following notation is adopted for technical arguments. Matrices and vectors will be typeset in boldface, while capital letters are reserved for random variables with the corresponding lower case symbols used for their realizations. In this paper, we only work with grayscale cover images. Precover values will be denoted with xij ∈ R, while cover and stego values will be integer arrays cij and sij , 1 ≤ i ≤ n1, 1 ≤ j ≤ n2, respectively. 
The symbols [x], ⌈x⌉, and ⌊x⌋ are used for rounding and rounding up and down the value of x. By N(μ, σ^2), we understand Gaussian distribution with mean μ and variance σ^2. The complementary cumulative distribution function of a standard normal variable (the tail probability) will be denoted Q(x) = ∫_x^∞ (2π)^(−1/2) exp(−z^2/2) dz. Finally, we say that f(x) ≈ g(x) when lim_{x→∞} f(x)/g(x) = 1. Prior art in side-informed steganography with precover All modern steganographic schemes, including those that use side-information, are implemented within the paradigm of distortion minimization. First, each cover element cij is assigned a “cost” ρij that measures the impact on detectability should that element be modified during embedding. The payload is then embedded while minimizing the sum of costs of all changed cover elements, ∑_{cij ≠ sij} ρij. A steganographic scheme that embeds with the minimal expected cost changes each cover element with probability βij = exp(−λρij) / (1 + exp(−λρij)), (1) if the embedding operation is constrained to be binary, and βij = exp(−λρij) / (1 + 2 exp(−λρij)), (2) for a ternary scheme with equal costs of changing cij to cij ± 1. Syndrome-trellis codes [8] can be used to build practical embedding schemes that operate near the rate–distortion bound. For steganography designed to minimize costs (embedding distortion), a popular heuristic to incorporate a precover value xij during embedding is to modulate the costs based on the rounding error eij = cij − xij, −1/2 ≤ eij ≤ 1/2 [23, 29, 20, 18, 19, 6, 24]. A binary embedding scheme modulates the cost of changing cij = [xij] to [xij] + sign(eij) by 1 − 2|eij|, while prohibiting the change to [xij] − sign(eij): ρij(sign(eij)) = (1 − 2|eij|) ρij, (3) ρij(−sign(eij)) = Ω, (4) where ρij(u) is the cost of modifying the cover value by u ∈ {−1,1}, ρij are costs of some additive embedding scheme, and Ω is a large constant. This modulation can be justified heuristically because when |eij| ≈ 1/2, a small perturbation of xij could cause cij to be rounded to the other side. Such coefficients are thus assigned a proportionally smaller cost because 1 − 2|eij| ≈ 0. On the other hand, the costs are unchanged when eij ≈ 0, as it takes a larger perturbation of the precover to change the rounded value. A ternary version of this embedding strategy [6] allows modifications both ways with costs: ρij(sign(eij)) = (1 − 2|eij|) ρij, (5) ρij(−sign(eij)) = ρij. (6) Some embedding schemes do not use costs and, instead, minimize statistical detectability. In MiPOD [25], the embedding probabilities βij are derived from their impact on the cover multivariate Gaussian model by solving the following equation for each pixel ij: βij Iij = λ ln((1 − 2βij)/βij), (7) where Iij = 2/σ̂ij^4 is the Fisher information with σ̂ij^2 an estimated variance of the acquisition noise at pixel ij, and λ is a Lagrange multiplier determined by the payload size. To incorporate the side-information, the sender first converts the embedding probabilities into costs and then modulates them as in (3) or (5). This can be done b",
"title": ""
},
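The steganography passage above gives the change-probability formulas for cost-based embedding (equations (1)-(2)) and the (1 − 2|e|) cost modulation used by side-informed schemes (equations (5)-(6)). Purely as a numerical illustration of those formulas, and not the authors' simulator, the sketch below modulates ternary costs by the rounding error and converts them to change probabilities, searching for the Lagrange multiplier λ that hits a target payload. The cost values, the target payload, and the unequal-cost generalization of equation (2) are my assumptions.

```python
import numpy as np

def ternary_change_probs(rho_pref, rho_other, lam):
    """Per-pixel probabilities of changes in the preferred and opposite directions."""
    z = 1.0 + np.exp(-lam * rho_pref) + np.exp(-lam * rho_other)
    return np.exp(-lam * rho_pref) / z, np.exp(-lam * rho_other) / z

def expected_payload(p_pref, p_other):
    """Total entropy (in bits) embedded by independent ternary changes."""
    p0 = 1.0 - p_pref - p_other
    probs = np.stack([p0, p_pref, p_other])
    return -np.sum(probs * np.log2(probs + 1e-12))

rng = np.random.default_rng(1)
n = 10_000
rho = rng.uniform(1.0, 10.0, n)          # additive costs (illustrative values only)
e = rng.uniform(-0.5, 0.5, n)            # precover rounding errors

# Side-informed modulation as in (5)-(6): the preferred direction is made cheaper.
rho_pref = (1.0 - 2.0 * np.abs(e)) * rho
rho_other = rho

# Binary search on lambda so that the expected payload matches a target (0.4 bpp here).
target_bits = 0.4 * n
lo, hi = 1e-3, 1e3
for _ in range(60):
    lam = 0.5 * (lo + hi)
    p_pref, p_other = ternary_change_probs(rho_pref, rho_other, lam)
    if expected_payload(p_pref, p_other) > target_bits:
        lo = lam                          # embedding too much: raise lambda to shrink probabilities
    else:
        hi = lam

p_pref, p_other = ternary_change_probs(rho_pref, rho_other, 0.5 * (lo + hi))
print("bits embedded:", expected_payload(p_pref, p_other))
```

With equal costs in both directions this reduces to equation (2); a practical embedder would then realize these probabilities with syndrome-trellis codes, which are not sketched here.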
{
"docid": "bae2f948eca1dc88cbcd5cb2e6165d3b",
"text": "Important attributes of 3D brain cortex segmentation algorithms include robustness, accuracy, computational efficiency, and facilitation of user interaction, yet few algorithms incorporate all of these traits. Manual segmentation is highly accurate but tedious and laborious. Most automatic techniques, while less demanding on the user, are much less accurate. It would be useful to employ a fast automatic segmentation procedure to do most of the work but still allow an expert user to interactively guide the segmentation to ensure an accurate final result. We propose a novel 3D brain cortex segmentation procedure utilizing dual-front active contours which minimize image-based energies in a manner that yields flexibly global minimizers based on active regions. Region-based information and boundary-based information may be combined flexibly in the evolution potentials for accurate segmentation results. The resulting scheme is not only more robust but much faster and allows the user to guide the final segmentation through simple mouse clicks which add extra seed points. Due to the flexibly global nature of the dual-front evolution model, single mouse clicks yield corrections to the segmentation that extend far beyond their initial locations, thus minimizing the user effort. Results on 15 simulated and 20 real 3D brain images demonstrate the robustness, accuracy, and speed of our scheme compared with other methods.",
"title": ""
},
{
"docid": "8010361144a7bd9fc336aba88f6e8683",
"text": "Moving garments and other cloth objects exhibit dynamic, complex wrinkles. Generating such wrinkles in a virtual environment currently requires either a time-consuming manual design process, or a computationally expensive simulation, often combined with accurate parameter-tuning requiring specialized animator skills. Our work presents an alternative approach for wrinkle generation which combines coarse cloth animation with a post-processing step for efficient generation of realistic-looking fine dynamic wrinkles. Our method uses the stretch tensor of the coarse animation output as a guide for wrinkle placement. To ensure temporal coherence, the placement mechanism uses a space-time approach allowing not only for smooth wrinkle appearance and disappearance, but also for wrinkle motion, splitting, and merging over time. Our method generates believable wrinkle geometry using specialized curve-based implicit deformers. The method is fully automatic and has a single user control parameter that enables the user to mimic different fabrics.",
"title": ""
},
{
"docid": "f05f4c731c6ae024026dbde007bf5b38",
"text": "While the first two functions are essential to switching power supplies, the latter has universal applications. Mixed-signal circuits, for instance, typically incur clock-synchronized load-current events that are faster than any active power supply circuit can supply, and do so while only surviving small variations in voltage. The result of these transient current excursions is noisy voltages, be they supply lines or data links. Capacitors are used to mitigate these effects, to supply and/or shunt the transient currents the power supply circuit is not quick enough to deliver, which is why a typical high performance system is sprinkled with many nanoand micro-Farad capacitors.",
"title": ""
},
{
"docid": "1a6ec9678c5ee8aa0861e6c606c22330",
"text": "Today millions of web-users express their opinions about many topics through blogs, wikis, fora, chats and social networks. For sectors such as e-commerce and e-tourism, it is very useful to automatically analyze the huge amount of social information available on the Web, but the extremely unstructured nature of these contents makes it a difficult task. SenticNet is a publicly available resource for opinion mining built exploiting AI and Semantic Web techniques. It uses dimensionality reduction to infer the polarity of common sense concepts and hence provide a public resource for mining opinions from natural language text at a semantic, rather than just syntactic, level.",
"title": ""
},
{
"docid": "0da299fb53db5980a10e0ae8699d2209",
"text": "Modern heuristics or metaheuristics are optimization algorithms that have been increasingly used during the last decades to support complex decision-making in a number of fields, such as logistics and transportation, telecommunication networks, bioinformatics, finance, and the like. The continuous increase in computing power, together with advancements in metaheuristics frameworks and parallelization strategies, are empowering these types of algorithms as one of the best alternatives to solve rich and real-life combinatorial optimization problems that arise in a number of financial and banking activities. This article reviews some of the works related to the use of metaheuristics in solving both classical and emergent problems in the finance arena. A non-exhaustive list of examples includes rich portfolio optimization, index tracking, enhanced indexation, credit risk, stock investments, financial project scheduling, option pricing, feature selection, bankruptcy and financial distress prediction, and credit risk assessment. This article also discusses some open opportunities for researchers in the field, and forecast the evolution of metaheuristics to include real-life uncertainty conditions into the optimization problems being considered.",
"title": ""
},
{
"docid": "9c8583dd46ef6ca49d7a9298377b755a",
"text": "Traditional radio planning tools present a steep learning curve. We present BotRf, a Telegram Bot that facilitates the process by guiding non-experts in assessing the feasibility of radio links. Built on open source tools, BotRf can run on any smartphone or PC running Telegram. Using it on a smartphone has the added value that the Bot can leverage the internal GPS to enter coordinates. BotRf can be used in environments with low bandwidth as the generated data traffic is quite limited. We present examples of its use in Venezuela.",
"title": ""
},
{
"docid": "4a65fcbc395eab512d8a7afe33c0f5ae",
"text": "In eukaryotes, the spindle-assembly checkpoint (SAC) is a ubiquitous safety device that ensures the fidelity of chromosome segregation in mitosis. The SAC prevents chromosome mis-segregation and aneuploidy, and its dysfunction is implicated in tumorigenesis. Recent molecular analyses have begun to shed light on the complex interaction of the checkpoint proteins with kinetochores — structures that mediate the binding of spindle microtubules to chromosomes in mitosis. These studies are finally starting to reveal the mechanisms of checkpoint activation and silencing during mitotic progression.",
"title": ""
},
{
"docid": "ce020748bd9bc7529036aa41dcd59a92",
"text": "In this paper a new isolated SEPIC converter which is a proper choice for PV applications, is introduced and analyzed. The proposed converter has the advantage of high voltage gain while the switch voltage stress is same as a regular SEPIC converter. The converter operating modes are discussed and design considerations are presented. Also simulation results are illustrated which justifies the theoretical analysis. Finally the proposed converter is improved using active clamp technique.",
"title": ""
}
] |
scidocsrr
|
5a1c1103fe0ec99a1fb094ceba3fcba5
|
BlurMe: inferring and obfuscating user gender based on ratings
|
[
{
"docid": "6b5c3a9f31151ef62f19085195ff5fc5",
"text": "We consider the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users. Specifically, we consider the Netflix Prize data set, and its leading algorithms, adapted to the framework of differential privacy.\n Unlike prior privacy work concerned with cryptographically securing the computation of recommendations, differential privacy constrains a computation in a way that precludes any inference about the underlying records from its output. Such algorithms necessarily introduce uncertainty--i.e., noise--to computations, trading accuracy for privacy.\n We find that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy. To adapt these algorithms, we explicitly factor them into two parts, an aggregation/learning phase that can be performed with differential privacy guarantees, and an individual recommendation phase that uses the learned correlations and an individual's data to provide personalized recommendations. The adaptations are non-trivial, and involve both careful analysis of the per-record sensitivity of the algorithms to calibrate noise, as well as new post-processing steps to mitigate the impact of this noise.\n We measure the empirical trade-off between accuracy and privacy in these adaptations, and find that we can provide non-trivial formal privacy guarantees while still outperforming the Cinematch baseline Netflix provides.",
"title": ""
}
] |
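The positive passage above describes adding noise calibrated to per-record sensitivity during the aggregation/learning phase of a recommender to obtain differential privacy. The sketch below shows only the generic Laplace mechanism applied to per-item rating sums and counts, a standard differential-privacy building block; it is not the specific calibration or post-processing the paper develops, and the privacy budget split, ε value, and rating bounds are arbitrary assumptions.

```python
import numpy as np

def dp_item_means(ratings, n_items, epsilon=1.0, r_min=1.0, r_max=5.0):
    """Differentially private per-item average ratings via the Laplace mechanism.

    `ratings` is a list of (item_id, rating) records. After clipping, one record
    changes one item's sum by at most (r_max - r_min) and one item's count by 1,
    so noise is calibrated to those sensitivities, splitting epsilon in half
    between the two queries.
    """
    rng = np.random.default_rng(0)
    sums = np.zeros(n_items)
    counts = np.zeros(n_items)
    for item, r in ratings:
        sums[item] += np.clip(r, r_min, r_max)
        counts[item] += 1

    eps_half = epsilon / 2.0
    noisy_sums = sums + rng.laplace(scale=(r_max - r_min) / eps_half, size=n_items)
    noisy_counts = counts + rng.laplace(scale=1.0 / eps_half, size=n_items)
    return noisy_sums / np.maximum(noisy_counts, 1.0)

# Toy usage with three items.
data = [(0, 4.0), (0, 5.0), (1, 2.0), (2, 3.5), (2, 4.5)]
print(dp_item_means(data, n_items=3, epsilon=1.0))
```

The paper's adaptations go further (noisy item-item similarities and noise-aware prediction), but the accuracy-for-privacy trade is already visible here: smaller ε means larger noise scales and noisier averages.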
[
{
"docid": "0d8c5526a5e5e69c644f27e11ecbfd5d",
"text": "Multi-view learning can provide self-supervision when different views are available of the same data. The distributional hypothesis provides another form of useful self-supervision from adjacent sentences which are plentiful in large unlabelled corpora. Motivated by the asymmetry in the two hemispheres of the human brain as well as the observation that different learning architectures tend to emphasise different aspects of sentence meaning, we create a unified multi-view sentence representation learning framework, in which, one view encodes the input sentence with a Recurrent Neural Network (RNN), and the other view encodes it with a simple linear model, and the training objective is to maximise the agreement specified by the adjacent context information between two views. We show that, after training, the vectors produced from our multi-view training provide improved representations over the single-view training, and the combination of different views gives further representational improvement and demonstrates solid transferability on standard downstream tasks.",
"title": ""
},
{
"docid": "50e081b178a1a308c61aae4a29789816",
"text": "The ability to engineer enzymes and other proteins to any desired stability would have wide-ranging applications. Here, we demonstrate that computational design of a library with chemically diverse stabilizing mutations allows the engineering of drastically stabilized and fully functional variants of the mesostable enzyme limonene epoxide hydrolase. First, point mutations were selected if they significantly improved the predicted free energy of protein folding. Disulfide bonds were designed using sampling of backbone conformational space, which tripled the number of experimentally stabilizing disulfide bridges. Next, orthogonal in silico screening steps were used to remove chemically unreasonable mutations and mutations that are predicted to increase protein flexibility. The resulting library of 64 variants was experimentally screened, which revealed 21 (pairs of) stabilizing mutations located both in relatively rigid and in flexible areas of the enzyme. Finally, combining 10-12 of these confirmed mutations resulted in multi-site mutants with an increase in apparent melting temperature from 50 to 85°C, enhanced catalytic activity, preserved regioselectivity and a >250-fold longer half-life. The developed Framework for Rapid Enzyme Stabilization by Computational libraries (FRESCO) requires far less screening than conventional directed evolution.",
"title": ""
},
{
"docid": "729fac8328b57376a954f2e7fc10405e",
"text": "Generative Adversarial Networks are proved to be efficient on various kinds of image generation tasks. However, it is still a challenge if we want to generate images precisely. Many researchers focus on how to generate images with one attribute. But image generation under multiple attributes is still a tough work. In this paper, we try to generate a variety of face images under multiple constraints using a pipeline process. The Pip-GAN (Pipeline Generative Adversarial Network) we present employs a pipeline network structure which can generate a complex facial image step by step using a neutral face image. We applied our method on two face image databases and demonstrate its ability to generate convincing novel images of unseen identities under multiple conditions previously.",
"title": ""
},
{
"docid": "9a2d79d9df9e596e26f8481697833041",
"text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.",
"title": ""
},
{
"docid": "42f3032626b2a002a855476a718a2b1b",
"text": "Learning controllers for bipedal robots is a challenging problem, often requiring expert knowledge and extensive tuning of parameters that vary in different situations. Recently, deep reinforcement learning has shown promise at automatically learning controllers for complex systems in simulation. This has been followed by a push towards learning controllers that can be transferred between simulation and hardware, primarily with the use of domain randomization. However, domain randomization can make the problem of finding stable controllers even more challenging, especially for underactuated bipedal robots. In this work, we explore whether policies learned in simulation can be transferred to hardware with the use of high-fidelity simulators and structured controllers. We learn a neural network policy which is a part of a more structured controller. While the neural network is learned in simulation, the rest of the controller stays fixed, and can be tuned by the expert as needed. We show that using this approach can greatly speed up the rate of learning in simulation, as well as enable transfer of policies between simulation and hardware. We present our results on an ATRIAS robot and explore the effect of action spaces and cost functions on the rate of transfer between simulation and hardware. Our results show that structured policies can indeed be learned in simulation and implemented on hardware successfully. This has several advantages, as the structure preserves the intuitive nature of the policy, and the neural network improves the performance of the hand-designed policy. In this way, we propose a way of using neural networks to improve expert designed controllers, while maintaining ease of understanding.",
"title": ""
},
{
"docid": "2258a0ba739557d489a796f050fad3e0",
"text": "The term fractional calculus is more than 300 years old. It is a generalization of the ordinary differentiation and integration to non-integer (arbitrary) order. The subject is as old as the calculus of differentiation and goes back to times when Leibniz, Gauss, and Newton invented this kind of calculation. In a letter to L’Hospital in 1695 Leibniz raised the following question (Miller and Ross, 1993): “Can the meaning of derivatives with integer order be generalized to derivatives with non-integer orders?\" The story goes that L’Hospital was somewhat curious about that question and replied by another question to Leibniz. “What if the order will be 1/2?\" Leibniz in a letter dated September 30, 1695 replied: “It will lead to a paradox, from which one day useful consequences will be drawn.\" The question raised by Leibniz for a fractional derivative was an ongoing topic in the last 300 years. Several mathematicians contributed to this subject over the years. People like Liouville, Riemann, and Weyl made major contributions to the theory of fractional calculus. The story of the fractional calculus continued with contributions from Fourier, Abel, Leibniz, Grünwald, and Letnikov. Nowadays, the fractional calculus attracts many scientists and engineers. There are several applications of this mathematical phenomenon in mechanics, physics, chemistry, control theory and so on (Caponetto et al., 2010; Magin, 2006; Monje et al., 2010; Oldham and Spanier, 1974; Oustaloup, 1995; Podlubny, 1999). It is natural that many authors tried to solve the fractional derivatives, fractional integrals and fractional differential equations in Matlab. A few very good and interesting Matlab functions were already submitted to the MathWorks, Inc. Matlab Central File Exchange, where they are freely downloadable for sharing among the users. In this chapter we will use some of them. It is worth mentioning some addition to Matlab toolboxes, which are appropriate for the solution of fractional calculus problems. One of them is a toolbox created by CRONE team (CRONE, 2010) and another one is the Fractional State–Space Toolkit developed by Dominik Sierociuk (Sierociuk, 2005). Last but not least we should also mention a Matlab toolbox created by Dingyü Xue (Xue, 2010), which is based on Matlab object for fractional-order transfer function and some manipulation with this class of the transfer function. Despite that the mentioned toolboxes are mainly for control systems, they can be “abused\" for solutions of general problems related to fractional calculus as well. 10",
"title": ""
},
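To make the topic of the passage above concrete, the following Python sketch implements one standard numerical building block of fractional calculus, the Grünwald-Letnikov approximation of an order-alpha derivative; the test function, step size, and function name are arbitrary choices, and the cited MATLAB toolboxes are not reproduced here.

```python
import numpy as np

def grunwald_letnikov(f_values, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative of f.

    f_values: samples f(t_0), ..., f(t_n) on a uniform grid with step h.
    Returns the approximate fractional derivative at every grid point.
    """
    n = len(f_values)
    # Coefficients w_j = (-1)^j * C(alpha, j), built by the standard recurrence.
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    d = np.zeros(n)
    for k in range(n):
        # Weighted sum over the full "memory" of the signal up to point k.
        d[k] = np.dot(w[: k + 1], f_values[k::-1]) / h**alpha
    return d

# Example: D^0.5 of f(t) = t on [0, 1]; the exact result is 2*sqrt(t/pi).
h = 1e-3
t = np.arange(0.0, 1.0 + h, h)
approx = grunwald_letnikov(t, 0.5, h)
exact = 2.0 * np.sqrt(t / np.pi)
print(np.max(np.abs(approx[1:] - exact[1:])))  # small discretisation error
```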
{
"docid": "d767a741ee5794a71de1afb84169f1b8",
"text": "The advent of Machine Learning as a Service (MLaaS) makes it possible to outsource a visual object recognition task to an external (e.g. cloud) provider. However, outsourcing such an image classification task raises privacy concerns, both from the image provider’s perspective, who wishes to keep their images confidential, and from the classification algorithm provider’s perspective, who wishes to protect the intellectual property of their classifier. We propose PICS, a private image classification system, based on polynomial kernel support vector machine (SVM) learning. We selected SVM because it allows us to apply only low-degree functions for the classification on private data, which is the reason why our solution remains computationally efficient. Our solution is based on Secure Multiparty Computation (MPC), it does not leak any information about the images to be classified, nor about the classifier parameters, and it is provably secure. We demonstrate the practicality of our approach by conducting experiments on realistic datasets. We show that our approach achieves high accuracy, comparable to that achieved on non-privacy-protected data while the input-dependent phase is at least 100 times faster than the similar approach with Fully Homomorphic Encryption.",
"title": ""
},
{
"docid": "4ec266df91a40330b704c4e10eacb820",
"text": "Recently many cases of missing children between ages 14 and 17 years are reported. Parents always worry about the possibility of kidnapping of their children. This paper proposes an Android based solution to aid parents to track their children in real time. Nowadays, most mobile phones are equipped with location services capabilities allowing us to get the device’s geographic position in real time. The proposed solution takes the advantage of the location services provided by mobile phone since most of kids carry mobile phones. The mobile application use the GPS and SMS services found in Android mobile phones. It allows the parent to get their child’s location on a real time map. The system consists of two sides, child side and parent side. A parent’s device main duty is to send a request location SMS to the child’s device to get the location of the child. On the other hand, the child’s device main responsibility is to reply the GPS position to the parent’s device upon request. Keywords—Child Tracking System, Global Positioning System (GPS), SMS-based Mobile Application.",
"title": ""
},
{
"docid": "064aba7f2bd824408bd94167da5d7b3a",
"text": "Online comments submitted by readers of news articles can provide valuable feedback and critique, personal views and perspectives, and opportunities for discussion. The varying quality of these comments necessitates that publishers remove the low quality ones, but there is also a growing awareness that by identifying and highlighting high quality contributions this can promote the general quality of the community. In this paper we take a user-centered design approach towards developing a system, CommentIQ, which supports comment moderators in interactively identifying high quality comments using a combination of comment analytic scores as well as visualizations and flexible UI components. We evaluated this system with professional comment moderators working at local and national news outlets and provide insights into the utility and appropriateness of features for journalistic tasks, as well as how the system may enable or transform journalistic practices around online comments.",
"title": ""
},
{
"docid": "60c42e3d0d0e82200a80b469a61f1921",
"text": "BACKGROUND\nDespite using sterile technique for catheter insertion, closed drainage systems, and structured daily care plans, catheter-associated urinary tract infections (CAUTIs) regularly occur in acute care hospitals. We believe that meaningful reduction in CAUTI rates can only be achieved by reducing urinary catheter use.\n\n\nMETHODS\nWe used an interventional study of a hospital-wide, multidisciplinary program to reduce urinary catheter use and CAUTIs on all patient care units in a 300-bed, community teaching hospital in Connecticut. Our primary focus was the implementation of a nurse-directed urinary catheter removal protocol. This protocol was linked to the physician's catheter insertion order. Three additional elements included physician documentation of catheter insertion criteria, a device-specific charting module added to physician electronic progress notes, and biweekly unit-specific feedback on catheter use rates and CAUTI rates in a multidisciplinary forum.\n\n\nRESULTS\nWe achieved a 50% hospital-wide reduction in catheter use and a 70% reduction in CAUTIs over a 36-month period, although there was wide variation from unit to unit in catheter reduction efforts, ranging from 4% (maternity) to 74% (telemetry).\n\n\nCONCLUSION\nUrinary catheter use, and ultimately CAUTI rates, can be effectively reduced by the diligent application of relatively few evidence-based interventions. Aggressive implementation of the nurse-directed catheter removal protocol was associated with lower catheter use rates and reduced infection rates.",
"title": ""
},
{
"docid": "3a0d2784b1115e82a4aedad074da8c74",
"text": "The aim of this paper is to present how to implement a control volume approach improved by Hermite radial basis functions (CV-RBF) for geochemical problems. A multi-step strategy based on Richardson extrapolation is proposed as an alternative to the conventional dual step sequential non-iterative approach (SNIA) for coupling the transport equations with the chemical model. Additionally, this paper illustrates how to use PHREEQC to add geochemical reaction capabilities to CV-RBF transport methods. Several problems with different degrees of complexity were solved including cases of cation exchange, dissolution, dissociation, equilibrium and kinetics at different rates for mineral species. The results show that the solution and strategies presented here are effective and in good agreement with other methods presented in the literature for the same cases. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "93801b742fd2b99b2416b9ab5eb069e7",
"text": "Importance-Performance Analysis (IPA) constitutes an indirect approximation to user's satisfaction measurement that allows to represent, in an easy and functional way, the main points and improvement areas of a specific product or service. Beginning from the importance and judgements concerning the performance that users grant to each prominent attributes of a service, it is possible to obtain a graphic divided into four quadrants in which recommendations for the organization economic resources management are included. Nevertheless, this tool has raised controversies since its origins, referred fundamentally to the placement of the axes that define the quadrants and the conception and measurement of the importance of attributes that compose the service. The primary goal of this article is to propose an alternative to the IPA representation that allows to overcome the limitations and contradictions derived from the original technique, without rejecting the classical graph. The analysis applies to data obtained in a survey about satisfaction with primary health care services of Galicia. Results will permit to advise to primary health care managers with a view toward the planning of future strategic actions.",
"title": ""
},
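The quadrant construction described above is easy to sketch in Python: place the axes at the grand means of importance and performance and assign each attribute to one of the four classical IPA quadrants. The attribute names and scores below are invented placeholders, and the grand-mean axis placement is exactly the convention whose alternatives the article discusses.

```python
import numpy as np

# Hypothetical mean importance / performance ratings per service attribute.
attributes  = ["waiting time", "staff kindness", "facilities", "information"]
importance  = np.array([4.6, 4.2, 3.1, 3.8])
performance = np.array([2.9, 4.4, 3.6, 3.0])

# Classical IPA places the quadrant axes at the grand means of both scales.
imp_axis, perf_axis = importance.mean(), performance.mean()

quadrant_names = {
    (True,  False): "Concentrate here (high importance, low performance)",
    (True,  True):  "Keep up the good work",
    (False, False): "Low priority",
    (False, True):  "Possible overkill",
}

for name, imp, perf in zip(attributes, importance, performance):
    q = quadrant_names[(bool(imp >= imp_axis), bool(perf >= perf_axis))]
    print(f"{name:15s} -> {q}")
```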
{
"docid": "46f6001ef4cd4fa02c9edef7ad316094",
"text": "5G will provide broadband access everywhere, entertain higher user mobility, and enable connectivity of massive number of devices (e.g. Internet of Things (IoT)) in an ultrareliable and affordable way. The main technological enablers such as cloud computing, Software Defined Networking (SDN) and Network Function Virtualization (NFV) are maturing towards their use in 5G. However, there are pressing security challenges in these technologies besides the growing concerns for user privacy. In this paper, we provide an overview of the security challenges in these technologies and the issues of privacy in 5G. Furthermore, we present security solutions to these challenges and future directions for secure 5G systems.",
"title": ""
},
{
"docid": "ee2f9d185e7e6b47a79fa8ef3ba227c9",
"text": "Pedestrian behavior modeling and analysis is important for crowd scene understanding and has various applications in video surveillance. Stationary crowd groups are a key factor influencing pedestrian walking patterns but was mostly ignored in the literature. It plays different roles for different pedestrians in a crowded scene and can change over time. In this paper, a novel model is proposed to model pedestrian behaviors by incorporating stationary crowd groups as a key component. Through inference on the interactions between stationary crowd groups and pedestrians, our model can be used to investigate pedestrian behaviors. The effectiveness of the proposed model is demonstrated through multiple applications, including walking path prediction, destination prediction, personality attribute classification, and abnormal event detection. To evaluate our model, two large pedestrian walking route datasets are built. The walking routes of around 15 000 pedestrians from two crowd surveillance videos are manually annotated. The datasets will be released to the public and benefit future research on pedestrian behavior analysis and crowd scene understanding.",
"title": ""
},
{
"docid": "df4477952bc78f9ddca6a637b0d9b990",
"text": "Food preference learning is an important component of wellness applications and restaurant recommender systems as it provides personalized information for effective food targeting and suggestions. However, existing systems require some form of food journaling to create a historical record of an individual's meal selections. In addition, current interfaces for food or restaurant preference elicitation rely extensively on text-based descriptions and rating methods, which can impose high cognitive load, thereby hampering wide adoption.\n In this paper, we propose PlateClick, a novel system that bootstraps food preference using a simple, visual quiz-based user interface. We leverage a pairwise comparison approach with only visual content. Using over 10,028 recipes collected from Yummly, we design a deep convolutional neural network (CNN) to learn the similarity distance metric between food images. Our model is shown to outperform state-of-the-art CNN by 4 times in terms of mean Average Precision. We explore a novel online learning framework that is suitable for learning users' preferences across a large scale dataset based on a small number of interactions (≤ 15). Our online learning approach balances exploitation-exploration and takes advantage of food similarities using preference-propagation in locally connected graphs.\n We evaluated our system in a field study of 227 anonymous users. The results demonstrate that our method outperforms other baselines by a significant margin, and the learning process can be completed in less than one minute. In summary, PlateClick provides a light-weight, immersive user experience for efficient food preference elicitation.",
"title": ""
},
{
"docid": "83728a9b746c7d3c3ea1e89ef01f9020",
"text": "This paper presents the design of the robot AILA, a mobile dual-arm robot system developed as a research platform for investigating aspects of the currently booming multidisciplinary area of mobile manipulation. The robot integrates and allows in a single platform to perform research in most of the areas involved in autonomous robotics: navigation, mobile and dual-arm manipulation planning, active compliance and force control strategies, object recognition, scene representation, and semantic perception. AILA has 32 degrees of freedom, including 7-DOF arms, 4-DOF torso, 2-DOF head, and a mobile base equipped with six wheels, each of them with two degrees of freedom. The primary design goal was to achieve a lightweight arm construction with a payload-to-weight ratio greater than one. Besides, an adjustable body should sustain the dual-arm system providing an extended workspace. In addition, mobility is provided by means of a wheel-based mobile base. As a result, AILA's arms can lift 8kg and weigh 5.5kg, thus achieving a payload-to-weight ratio of 1.45. The paper will provide an overview of the design, especially in the mechatronics area, as well as of its realization, the sensors incorporated in the system, and its control software.",
"title": ""
},
{
"docid": "b6c9844bdad60c5373cac2bcd018d899",
"text": "Cloud computing is currently gaining enormous momentum due to a number of promised benefits: ease of use in terms of deployment, administration, and maintenance, along with high scalability and flexibility to create new services. However, as more personal and business applications migrate to the cloud, service quality will become an important differentiator between providers. In particular, quality of experience as perceived by users has the potential to become the guiding paradigm for managing quality in the cloud. In this article, we discuss technical challenges emerging from shifting services to the cloud, as well as how this shift impacts QoE and QoE management. Thereby, a particular focus is on multimedia cloud applications. Together with a novel QoE-based classification scheme of cloud applications, these challenges drive the research agenda on QoE management for cloud applications.",
"title": ""
},
{
"docid": "8b8ec88419baa23e29d2ec336e8805c6",
"text": "Short-term passenger demand forecasting is of great importance to the ondemand ride service platform, which can incentivize vacant cars moving from over-supply regions to over-demand regions. The spatial dependences, temporal dependences, and exogenous dependences need to be considered simultaneously, however, which makes short-term passenger demand forecasting challenging. We propose a novel deep learning (DL) approach, named the fusion convolutional long short-term memory network (FCL-Net), to address these three dependences within one end-to-end learning architecture. The model is stacked and fused by multiple convolutional long short-term memory (LSTM) layers, standard LSTM layers, and convolutional layers. The fusion of convolutional techniques and the LSTM network enables the proposed DL approach to better capture the spatiotemporal characteristics and correlations of explanatory variables. A tailored spatially aggregated random forest is employed to rank the importance of the explanatory variables. The ranking is then used for feature selection. The proposed DL approach is applied to the short-term forecasting of passenger demand under an on-demand ride service platform in Hangzhou, China. Experimental results, validated on real-world data provided by DiDi Chuxing, show that the FCL-Net achieves better predictive performance than traditional approaches in∗Corresponding author Email address: [email protected] (Xiqun (Michael) Chen) Preprint submitted to Elsevier June 21, 2017 ar X iv :1 70 6. 06 27 9v 1 [ cs .A I] 2 0 Ju n 20 17 cluding both classical time-series prediction models and neural network based algorithms (e.g., artificial neural network and LSTM). Furthermore, the consideration of exogenous variables in addition to passenger demand itself, such as the travel time rate, time-of-day, day-of-week, and weather conditions, is proven to be promising, since it reduces the root mean squared error (RMSE) by 50.9%. It is also interesting to find that the feature selection reduces 30% in the dimension of predictors and leads to only 0.6% loss in the forecasting accuracy measured by RMSE in the proposed model. This paper is one of the first DL studies to forecast the short-term passenger demand of an on-demand ride service platform by examining the spatio-temporal correlations.",
"title": ""
},
{
"docid": "c2a2e9903859a6a9f9b3db5696cb37ff",
"text": "Depth estimation from a single image is a fundamental problem in computer vision. In this paper, we propose a simple yet effective convolutional spatial propagation network (CSPN) to learn the affinity matrix for depth prediction. Specifically, we adopt an efficient linear propagation model, where the propagation is performed with a manner of recurrent convolutional operation, and the affinity among neighboring pixels is learned through a deep convolutional neural network (CNN). We apply the designed CSPN to two depth estimation tasks given a single image: (1) Refine the depth output from existing state-of-the-art (SOTA) methods; (2) Convert sparse depth samples to a dense depth map by embedding the depth samples within the propagation procedure. The second task is inspired by the availability of LiDAR that provides sparse but accurate depth measurements. We experimented the proposed CSPN over the popular NYU v2 [1] and KITTI [2] datasets, where we show that our proposed approach improves not only quality (e.g., 30% more reduction in depth error), but also speed (e.g., 2 to 5× faster) of depth maps than previous SOTA methods.",
"title": ""
},
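A heavily simplified sketch of the propagation idea in the passage above follows: a recurrent, affinity-weighted local averaging of a depth map, with the sparse depth samples re-imposed after every step. In the actual CSPN the affinities are predicted by a CNN and the operation is a learned convolution; here the affinities are plain arrays and the example uses uniform weights, so this only illustrates the mechanism.

```python
import numpy as np

def propagate_depth(depth, affinity, sparse_depth=None, sparse_mask=None, steps=10):
    """Simplified spatial-propagation loop on an H x W depth map.

    affinity: H x W x 8 non-negative weights for the 8 neighbours of each pixel
              (in CSPN these come from a CNN; here they are given as input).
    sparse_depth / sparse_mask: optional exact depth samples re-imposed after
              every step (the sparse-to-dense variant described above).
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    # Normalise so that the 8 neighbour weights plus the self weight sum to one.
    norm = affinity / (1.0 + affinity.sum(axis=2, keepdims=True))
    self_w = 1.0 - norm.sum(axis=2)
    out = depth.copy()
    for _ in range(steps):
        acc = self_w * out
        for k, (dy, dx) in enumerate(offsets):
            # np.roll wraps around at the borders, which is fine for a toy example.
            shifted = np.roll(np.roll(out, dy, axis=0), dx, axis=1)
            acc += norm[:, :, k] * shifted
        out = acc
        if sparse_depth is not None:
            out[sparse_mask] = sparse_depth[sparse_mask]  # keep LiDAR points exact
    return out

# Toy usage with uniform affinities (pure smoothing of a noisy depth map).
rng = np.random.default_rng(0)
noisy = 5.0 + 0.2 * rng.standard_normal((64, 64))
refined = propagate_depth(noisy, affinity=np.ones((64, 64, 8)), steps=20)
print(noisy.std(), refined.std())   # the variance shrinks after propagation
```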
{
"docid": "d8ec0c507217500a97c1664c33b2fe72",
"text": "To realize ideal force control of robots that interact with a human, a very precise actuating system with zero impedance is desired. For such applications, a rotary series elastic actuator (RSEA) has been introduced recently. This paper presents the design of RSEA and the associated control algorithms. To generate joint torque as desired, a torsional spring is installed between a motor and a human joint, and the motor is controlled to produce a proper spring deflection for torque generation. When the desired torque is zero, the motor must follow the human joint motion, which requires that the friction and the inertia of the motor be compensated. The human joint and the body part impose the load on the RSEA. They interact with uncertain environments and their physical properties vary with time. In this paper, the disturbance observer (DOB) method is applied to make the RSEA precisely generate the desired torque under such time-varying conditions. Based on the nominal model preserved by the DOB, feedback and feedforward controllers are optimally designed for the desired performance, i.e., the RSEA: (1) exhibits very low impedance and (2) generates the desired torque precisely while interacting with a human. The effectiveness of the proposed design is verified by experiments.",
"title": ""
}
] |
scidocsrr
|
fa05059ee4caed8a9d565fc0ec0d0b5b
|
Context-Sensitive Twitter Sentiment Classification Using Neural Network
|
[
{
"docid": "e95d41b322dccf7f791ed88a9f2ccced",
"text": "Most of the recent literature on Sentiment Analysis over Twitter is tied to the idea that the sentiment is a function of an incoming tweet. However, tweets are filtered through streams of posts, so that a wider context, e.g. a topic, is always available. In this work, the contribution of this contextual information is investigated. We modeled the polarity detection problem as a sequential classification task over streams of tweets. A Markovian formulation of the Support Vector Machine discriminative model as embodied by the SVMhmm algorithm has been here employed to assign the sentiment polarity to entire sequences. The experimental evaluation proves that sequential tagging effectively embodies evidence about the contexts and is able to reach a relative increment in detection accuracy of around 20% in F1 measure. These results are particularly interesting as the approach is flexible and does not require manually coded resources.",
"title": ""
}
] |
[
{
"docid": "2c18433b18421cd9e0f28605809a8665",
"text": "Cross-domain knowledge bases such as DBpedia, YAGO, or the Google Knowledge Graph have gained increasing attention over the last years and are starting to be deployed within various use cases. However, the content of such knowledge bases is far from being complete, far from always being correct, and suffers from deprecation (i.e. population numbers become outdated after some time). Hence, there are efforts to leverage various types of Web data to complement, update and extend such knowledge bases. A source of Web data that potentially provides a very wide coverage are millions of relational HTML tables that are found on the Web. The existing work on using data from Web tables to augment cross-domain knowledge bases reports only aggregated performance numbers. The actual content of the Web tables and the topical areas of the knowledge bases that can be complemented using the tables remain unclear. In this paper, we match a large, publicly available Web table corpus to the DBpedia knowledge base. Based on the matching results, we profile the potential of Web tables for augmenting different parts of cross-domain knowledge bases and report detailed statistics about classes, properties, and instances for which missing values can be filled using Web table data as evidence. In order to estimate the potential quality of the new values, we empirically examine the Local Closed World Assumption and use it to determine the maximal number of correct facts that an ideal data fusion strategy could generate. Using this as ground truth, we compare three data fusion strategies and conclude that knowledge-based trust outperforms PageRankand voting-based fusion.",
"title": ""
},
{
"docid": "fac131e435b5dfe9a7cd839b07bec139",
"text": "The past two decades have witnessed an explosion in the identification, largely by positional cloning, of genes associated with mendelian diseases. The roughly 1,200 genes that have been characterized have clarified our understanding of the molecular basis of human genetic disease. The principles derived from these successes should be applied now to strategies aimed at finding the considerably more elusive genes that underlie complex disease phenotypes. The distribution of types of mutation in mendelian disease genes argues for serious consideration of the early application of a genomic-scale sequence-based approach to association studies and against complete reliance on a positional cloning approach based on a map of anonymous single nucleotide polymorphism haplotypes.",
"title": ""
},
{
"docid": "6291caf1fae634c6e9ce8a22dab35cce",
"text": "Effective home energy management requires data on the current power consumption of devices in the home. Individually monitoring every appliance is costly and inconvenient. Non-Intrusive Load Monitoring (NILM) promises to provide individual electrical load information from aggregate power measurements. Application of NILM in residential settings has been constrained by the data provided by utility billing smart meters. Current utility billing smart meters do not deliver data that supports quantifying the harmonic content in the 60 Hz waveforms. Research in NILM has a critical need for a low-cost sensor system to collect energy data with fast sampling and significant precision to demonstrate actual data requirements. Implementation of cost-effective NILM in a residential consumer context requires real-time processing of this data to identify individual loads. This paper describes a system providing a powerful and flexible platform, supporting user configuration of sampling rates and amplitude resolution up to 65 kHz and up to 24 bits respectively. The internal processor is also capable of running NILM algorithms in real time on the sampled measurements. Using this prototype, real time load identification can be provided to the consumer for control, visualization, feedback, and demand response implications.",
"title": ""
},
{
"docid": "803a5dbedf309cec97d130438e687002",
"text": "Affective computing is a newly trend the main goal is exploring the human emotion things. The human emotion is leaded into a key position of behavior clue, and hence it should be included within the sensible model when an intelligent system aims to simulate or forecast human responses. This research utilizes decision tree one of data mining model to classify the emotion. This research integrates and manipulates the Thayer's emotion mode and color theory into the decision tree model, C4.5 for an innovative emotion detecting system. This paper uses 320 data in four emotion groups to train and build the decision tree for verifying the accuracy in this system. The result reveals that C4.5 decision tree model can be effective classified the emotion by feedback color from human. For the further research, colors will not the only human behavior clues, even more than all the factors from human interaction.",
"title": ""
},
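A minimal sketch of the kind of pipeline described above is shown below, using scikit-learn's DecisionTreeClassifier with the entropy criterion as a stand-in for C4.5 (which scikit-learn does not implement exactly); the colour samples, emotion labels, and their mapping to Thayer's model are invented placeholder data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: RGB colours chosen by users, labelled with one of four
# emotion groups (roughly the quadrants of Thayer's valence/arousal model).
rng = np.random.default_rng(42)
centres = {                       # one "typical" colour per emotion group
    "exuberant": (230, 190, 40),
    "anxious":   (200, 30, 30),
    "calm":      (120, 200, 160),
    "depressed": (60, 60, 120),
}
X, y = [], []
for label, c in centres.items():
    samples = rng.normal(loc=c, scale=25, size=(80, 3)).clip(0, 255)
    X.append(samples)
    y += [label] * 80             # 4 groups x 80 samples = 320 data points
X = np.vstack(X)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
# The entropy criterion keeps the spirit of C4.5's information-gain splits.
clf = DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```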
{
"docid": "fba35e7409ab7bf9a760f4aeb007a77a",
"text": "Sparse code multiple access (SCMA) is a promising uplink multiple access technique that can achieve superior spectral efficiency, provided that multidimensional codebooks are carefully designed. In this letter, we investigate the multiuser codebook design for SCMA systems over Rayleigh fading channels. The criterion of the proposed design is derived from the cutoff rate analysis of the equivalent multiple-input multiple-output system. Furthermore, new codebooks with signal-space diversity are suggested, while simulations show that this criterion is efficient in developing codebooks with substantial performance improvement, compared with the existing ones.",
"title": ""
},
{
"docid": "b5cc41f689a1792b544ac66a82152993",
"text": "0020-7225/$ see front matter 2009 Elsevier Ltd doi:10.1016/j.ijengsci.2009.08.001 * Corresponding author. Tel.: +66 2 9869009x220 E-mail address: [email protected] (T. Leephakp Nowadays, Pneumatic Artificial Muscle (PAM) has become one of the most widely-used fluid-power actuators which yields remarkable muscle-like properties such as high force to weight ratio, soft and flexible structure, minimal compressed-air consumption and low cost. To obtain optimum design and usage, it is necessary to understand mechanical behaviors of the PAM. In this study, the proposed models are experimentally derived to describe mechanical behaviors of the PAMs. The experimental results show a non-linear relationship between contraction as well as air pressure within the PAMs and a pulling force of the PAMs. Three different sizes of PAMs available in industry are studied for empirical modeling and simulation. The case studies are presented to verify close agreement on the simulated results to the experimental results when the PAMs perform under various loads. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7cef2fac422d9fc3c3ffbc130831b522",
"text": "Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations , \" (Received 00 Month 200x; In final form 00 Month 200x) This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out safer, cheaper, and more manageable. In the VEHIL laboratory a full-scale ADAS-equipped vehicle is set up in a hardware-in-the-loop simulation environment, where a chassis dynamometer is used to emulate the road interaction and robot vehicles to represent other traffic. In this controlled environment the performance and dependability of an ADAS is tested to great accuracy and reliability. The working principle and the added value of VEHIL are demonstrated with test results of an adaptive cruise control and a forward collision warning system. Based on the 'V' diagram, the position of VEHIL in the development process of ADASs is illustrated.",
"title": ""
},
{
"docid": "4adee6dc3dfc57c4180c4107e0af89a8",
"text": "Objective\nSocial media is an important pharmacovigilance data source for adverse drug reaction (ADR) identification. Human review of social media data is infeasible due to data quantity, thus natural language processing techniques are necessary. Social media includes informal vocabulary and irregular grammar, which challenge natural language processing methods. Our objective is to develop a scalable, deep-learning approach that exceeds state-of-the-art ADR detection performance in social media.\n\n\nMaterials and Methods\nWe developed a recurrent neural network (RNN) model that labels words in an input sequence with ADR membership tags. The only input features are word-embedding vectors, which can be formed through task-independent pretraining or during ADR detection training.\n\n\nResults\nOur best-performing RNN model used pretrained word embeddings created from a large, non-domain-specific Twitter dataset. It achieved an approximate match F-measure of 0.755 for ADR identification on the dataset, compared to 0.631 for a baseline lexicon system and 0.65 for the state-of-the-art conditional random field model. Feature analysis indicated that semantic information in pretrained word embeddings boosted sensitivity and, combined with contextual awareness captured in the RNN, precision.\n\n\nDiscussion\nOur model required no task-specific feature engineering, suggesting generalizability to additional sequence-labeling tasks. Learning curve analysis showed that our model reached optimal performance with fewer training examples than the other models.\n\n\nConclusion\nADR detection performance in social media is significantly improved by using a contextually aware model and word embeddings formed from large, unlabeled datasets. The approach reduces manual data-labeling requirements and is scalable to large social media datasets.",
"title": ""
},
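A compact PyTorch sketch of the general architecture described above (word embeddings feeding a recurrent layer that emits a per-token ADR tag) is given below; it is not the authors' exact model, and the vocabulary size, layer dimensions, and three-tag scheme are placeholder assumptions.

```python
import torch
import torch.nn as nn

class ADRSequenceTagger(nn.Module):
    """Word-embedding + BiLSTM token tagger (ADR vs. non-ADR), illustrative only."""

    def __init__(self, vocab_size=20000, embed_dim=100, hidden_dim=128, num_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)   # e.g. O / B-ADR / I-ADR

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices.
        emb = self.embed(token_ids)
        hidden, _ = self.rnn(emb)
        return self.out(hidden)                          # (batch, seq_len, num_tags)

# Tiny smoke test with random data standing in for tokenised tweets.
model = ADRSequenceTagger()
tokens = torch.randint(1, 20000, (4, 25))                # 4 tweets, 25 tokens each
gold = torch.randint(0, 3, (4, 25))                      # per-token gold tags
logits = model(tokens)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 3), gold.reshape(-1))
loss.backward()
print(logits.shape, float(loss))
```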
{
"docid": "9b5877847bedecd73a8c2f0d6f832641",
"text": "Traditional, more biochemically motivated approaches to chemical design and drug discovery are notoriously complex and costly processes. The space of all synthesizable molecules is far too large to exhaustively search any meaningful subset for interesting novel drug and molecule proposals, and the lack of any particularly informative and manageable structure to this search space makes the very task of defining interesting subsets a difficult problem in itself. Recent years have seen the proposal and rapid development of alternative, machine learning-based methods for vastly simplifying the search problem specified in chemical design and drug discovery. In this work, I build upon this existing literature exploring the possibility of automatic chemical design and propose a novel generative model for producing a diverse set of valid new molecules. The proposed molecular graph variational autoencoder model achieves comparable performance across standard metrics to the state-of-the-art in this problem area and is capable of regularly generating valid molecule proposals similar but distinctly different from known sets of interesting molecules. While an interesting result in terms of addressing one of the core issues with machine learning-based approaches to automatic chemical design, further research in this direction should aim to optimize for more biochemically motivated objectives and be more informed by the ultimate utility of such models to the biochemical field.",
"title": ""
},
{
"docid": "a54f912c14b44fc458ed8de9e19a5e82",
"text": "Musical training has recently gained additional interest in education as increasing neuroscientific research demonstrates its positive effects on brain development. Neuroimaging revealed plastic changes in the brains of adult musicians but it is still unclear to what extent they are the product of intensive music training rather than of other factors, such as preexisting biological markers of musicality. In this review, we synthesize a large body of studies demonstrating that benefits of musical training extend beyond the skills it directly aims to train and last well into adulthood. For example, children who undergo musical training have better verbal memory, second language pronunciation accuracy, reading ability and executive functions. Learning to play an instrument as a child may even predict academic performance and IQ in young adulthood. The degree of observed structural and functional adaptation in the brain correlates with intensity and duration of practice. Importantly, the effects on cognitive development depend on the timing of musical initiation due to sensitive periods during development, as well as on several other modulating variables. Notably, we point to motivation, reward and social context of musical education, which are important yet neglected factors affecting the long-term benefits of musical training. Further, we introduce the notion of rhythmic entrainment and suggest that it may represent a mechanism supporting learning and development of executive functions. It also hones temporal processing and orienting of attention in time that may underlie enhancements observed in reading and verbal memory. We conclude that musical training uniquely engenders near and far transfer effects, preparing a foundation for a range of skills, and thus fostering cognitive development.",
"title": ""
},
{
"docid": "7ef86793639ce209fa168f4368854b5e",
"text": "In this paper, we compare learning techniques based on statistical classification to traditional methods of relevance feedback for the document routing problem. We consider three classification techniques which have decision rules that are derived via explicit error minimization linear discriminant analysis, logistic regression, and neuraf networks. We demonstrate that the classifiers perform 1015% better than relevance feedback via Rocchio expansion for the TREC-2 and TREC-3 routing tasks. Error minimization is difficult in high-dimensional feature spaces because the convergence process is slow and the models ~e prone to overfitting. We use two different strategies, latent semantic indexing and optimaJ term selection, to reduce the number of features. Our results indicate that features based on latent semantic indexing are more effective for techniques such as linear discriminant anafysis and logistic regression, which have no way to protect against overfitting. Neural networks perform equally well with either set of features and can take advantage of the additional information available when both feature sets are used as input.",
"title": ""
},
{
"docid": "5b3ba0fc32229e78cfde49716ce909bd",
"text": "Previous work by Lin et al. (2011) demonstrated the effectiveness of using discourse relations for evaluating text coherence. However, their work was based on discourse relations annotated in accordance with the Penn Discourse Treebank (PDTB) (Prasad et al., 2008), which encodes only very shallow discourse structures; therefore, they cannot capture long-distance discourse dependencies. In this paper, we study the impact of deep discourse structures for the task of coherence evaluation, using two approaches: (1) We compare a model with features derived from discourse relations in the style of Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), which annotate the full hierarchical discourse structure, against our re-implementation of Lin et al.’s model; (2) We compare a model encoded using only shallow RST-style discourse relations, against the one encoded using the complete set of RST-style discourse relations. With an evaluation on two tasks, we show that deep discourse structures are truly useful for better differentiation of text coherence, and in general, RST-style encoding is more powerful than PDTBstyle encoding in these settings.",
"title": ""
},
{
"docid": "f9ee82dcf1cce6d41a7f106436ee3a7d",
"text": "The Automatic Identification System (AIS) is based on VHF radio transmissions of ships' identity, position, speed and heading, in addition to other key parameters. In 2004, the Norwegian Defence Research Establishment (FFI) undertook studies to evaluate if the AIS signals could be detected in low Earth orbit. Since then, the interest in Space-Based AIS reception has grown significantly, and both public and private sector organizations have established programs to study the issue, and demonstrate such a capability in orbit. FFI is conducting two such programs. The objective of the first program was to launch a nano-satellite equipped with an AIS receiver into a near polar orbit, to demonstrate Space-Based AIS reception at high latitudes. The satellite was launched from India 12th July 2010. Even though the satellite has not finished commissioning, the receiver is operated with real-time transmission of received AIS data to the Norwegian Coastal Administration. The second program is an ESA-funded project to operate an AIS receiver on the European Columbus module of the International Space Station. Mounting of the equipment, the NORAIS receiver, was completed in April 2010. Currently, the AIS receiver has operated for more than three months, picking up several million AIS messages from more than 60 000 ship identities. In this paper, we will present experience gained with the space-based AIS systems, highlight aspects of tracking ships throughout their voyage, and comment on possible contributions to port security.",
"title": ""
},
{
"docid": "53b6315bfb8fcfef651dd83138b11378",
"text": "We illustrate the correspondence between uncertainty sets in robust optimization and some popular risk measures in finance, and show how robust optimization can be used to generalize the concepts of these risk measures. We also show that by using properly defined uncertainty sets in robust optimization models, one can construct coherent risk measures. Our results have implications for efficient portfolio optimization under different measures of risk. Department of Mathematics, National University of Singapore, Singapore 117543. Email: [email protected]. The research of the author was partially supported by Singapore-MIT Alliance, NUS Risk Management Institute and NUS startup grants R-146-050-070-133 & R146-050-070-101. Division of Mathematics and Sciences, Babson College, Babson Park, MA 02457, USA. E-mail: [email protected]. Research supported by the Gill grant from the Babson College Board of Research. NUS Business School, National University of Singapore. Email: [email protected]. The research of the author was partially supported by Singapore-MIT Alliance, NUS Risk Management Institute and NUS academic research grant R-314-000-066-122 and R-314-000-068-122.",
"title": ""
},
{
"docid": "fb484e0b6b5e82984a3e1176dfae8d4c",
"text": "In this paper, we describe how we are using text mining solutions used to enhance the production of systematic reviews. This collaborative project also serves as a proof of concept and as a testbed for deriving requirements for the development of more generally applicable text mining tools and services.",
"title": ""
},
{
"docid": "4c951d6be8b49c9931492b5f89009fb3",
"text": "Tooth preparations for fixed prosthetic restorations can be done in different ways, basically of two kinds: preparation with a defined margin and the so-called vertical preparation or feather edge. The latter was originally used for prosthetics on teeth treated with resective surgery for periodontal disease. In this article, the author presents a prosthetic technique for periodontally healthy teeth using feather edge preparation in a flapless approach in both esthetic and posterior areas with ceramometal and zirconia restorations, achieving high quality clinical and esthetic results in terms of soft tissue stability at the prosthetic/tissue interface, both in the short and in the long term (clinical follow-up up to fifteen years). Moreover, the BOPT technique, if compared to other preparation techniques (chamfer, shoulder, etc), is simpler and faster when in preparation impression taking, temporary crowns' relining and creating the crowns' profiles up to the final prosthetic restoration.",
"title": ""
},
{
"docid": "fd568ae231543517bd660d37c0b71570",
"text": "Chemical and electrical interaction within and between cells is well established. Just the opposite is true about cellular interactions via other physical fields. The most probable candidate for an other form of cellular interaction is the electromagnetic field. We review theories and experiments on how cells can generate and detect electromagnetic fields generally, and if the cell-generated electromagnetic field can mediate cellular interactions. We do not limit here ourselves to specialized electro-excitable cells. Rather we describe physical processes that are of a more general nature and probably present in almost every type of living cell. The spectral range included is broad; from kHz to the visible part of the electromagnetic spectrum. We show that there is a rather large number of theories on how cells can generate and detect electromagnetic fields and discuss experimental evidence on electromagnetic cellular interactions in the modern scientific literature. Although small, it is continuously accumulating.",
"title": ""
},
{
"docid": "d9176322068e6ca207ae913b1164b3da",
"text": "Topic Detection and Tracking (TDT) is a variant of classiication in which the classes are not known or xed in advance. Consider for example an incoming stream of news articles or email messages that are to be classiied by topic; new classes must be created as new topics arise. The problem is a challenging one for machine learning. Instances of new topics must be recognized as not belonging to any of the existing classes (detection), and instances of old topics must be correctly classiied (tracking)|often with extremely little training data per class. This paper proposes a new approach to TDT based on probabilis-tic, generative models. Strong statistical techniques are used to address the many challenges: hierarchical shrinkage for sparse data, statistical \\garbage collection\" for new event detection, clustering in time to separate the diierent events of a common topic, and deterministic anneal-ing for creating the hierarchy. Preliminary experimental results show promise.",
"title": ""
},
{
"docid": "7dbb697a8793027d8aa55202989cb99e",
"text": "We consider the problem of finding the minimizer of a function f : R → R of the finite-sum form min f(w) = 1/n ∑n i fi(w). This problem has been studied intensively in recent years in the field of machine learning (ML). One promising approach for large-scale data is to use a stochastic optimization algorithm to solve the problem. SGDLibrary is a readable, flexible and extensible pure-MATLAB library of a collection of stochastic optimization algorithms. The purpose of the library is to provide researchers and implementers a comprehensive evaluation environment for the use of these algorithms on various ML problems. Published in Journal of Machine Learning Research (JMLR) entitled “SGDLibrary: A MATLAB library for stochastic gradient optimization algorithms” [1]",
"title": ""
},
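The finite-sum objective in the passage above is easy to make concrete. The following numpy sketch runs plain SGD on a least-squares instance of min_w (1/n) Σ_i f_i(w); it only illustrates the problem class, since SGDLibrary itself is a MATLAB library and its API is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true + 0.01 * rng.standard_normal(n)

# f(w) = (1/n) * sum_i f_i(w)  with  f_i(w) = 0.5 * (a_i . w - b_i)^2
def grad_fi(w, i):
    a = A[i]
    return (a @ w - b[i]) * a

w = np.zeros(d)
step = 0.05
for epoch in range(30):
    for i in rng.permutation(n):            # one pass over the components f_i
        w -= step * grad_fi(w, i)
    step *= 0.9                             # simple decaying step size
print("distance to w_true:", np.linalg.norm(w - w_true))
```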
{
"docid": "f840350d14a99f3da40729cfe6d56ef5",
"text": "This paper presents a sub-radix-2 redundant architecture to improve the performance of switched-capacitor successive-approximation-register (SAR) analog-to-digital converters (ADCs). The redundancy not only guarantees digitally correctable static nonlinearities of the converter, it also offers means to combat dynamic errors in the conversion process, and thus, accelerating the speed of the SAR architecture. A perturbation-based digital calibration technique is also described that closely couples with the architecture choice to accomplish simultaneous identification of multiple capacitor mismatch errors of the ADC, enabling the downsizing of all sampling capacitors to save power and silicon area. A 12-bit prototype measured a Nyquist 70.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a Nyquist 90.3-dB spurious free dynamic range (SFDR) at 22.5 MS/s, while dissipating 3.0-mW power from a 1.2-V supply and occupying 0.06-mm2 silicon area in a 0.13-μm CMOS process. The figure of merit (FoM) of this ADC is 51.3 fJ/step measured at 22.5 MS/s and 36.7 fJ/step at 45 MS/s.",
"title": ""
}
] |
scidocsrr
|
1d98b8644cdf9a4d8002019c30e054a1
|
Short text classification by detecting information path
|
[
{
"docid": "fe3029a9e54f068a1387014778c1128d",
"text": "We propose a simple, scalable, and non-parametric approach for short text classification. Leveraging the well studied and scalable Information Retrieval (IR) framework, our approach mimics human labeling process for a piece of short text. It first selects the most representative and topical-indicative words from a given short text as query words, and then searches for a small set of labeled short texts best matching the query words. The predicted category label is the majority vote of the search results. Evaluated on a collection of more than 12K Web snippets, the proposed approach achieves comparable classification accuracy with the baseline Maximum Entropy classifier using as few as 3 query words and top-5 best matching search hits. Among the four query word selection schemes proposed and evaluated in our experiments, term frequency together with clarity gives the best classification accuracy.",
"title": ""
},
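The retrieve-and-vote procedure described above can be sketched in a few lines of Python: pick the most frequent terms of the snippet as the query, retrieve the best-matching labelled snippets from a TF-IDF index, and take a majority vote over their labels. The snippet collection and labels below are toy placeholders, and plain term frequency stands in for the paper's clarity-weighted query-word selection.

```python
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy labelled snippet collection standing in for the 12K web snippets.
labelled = [
    ("the team won the championship game last night", "sports"),
    ("the striker scored twice in the league match", "sports"),
    ("stocks fell as the central bank raised interest rates", "business"),
    ("the company reported record quarterly earnings", "business"),
    ("the new phone ships with a faster processor", "tech"),
    ("researchers released an open source deep learning library", "tech"),
]
texts, labels = zip(*labelled)
vectorizer = TfidfVectorizer(stop_words="english")
index = vectorizer.fit_transform(texts)

def classify(snippet, num_query_words=3, top_hits=3):
    # 1. Pick the most frequent (non-stopword) terms of the snippet as the query.
    tokens = vectorizer.build_analyzer()(snippet)
    query = " ".join(w for w, _ in Counter(tokens).most_common(num_query_words))
    # 2. Retrieve the best-matching labelled snippets.
    sims = cosine_similarity(vectorizer.transform([query]), index).ravel()
    hits = sims.argsort()[::-1][:top_hits]
    # 3. Majority vote over the labels of the hits.
    return Counter(labels[i] for i in hits).most_common(1)[0][0]

print(classify("bank profits and quarterly earnings beat expectations"))
```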
{
"docid": "95689f439fababe920921ee419965b90",
"text": "In traditional text clustering methods, documents are represented as \"bags of words\" without considering the semantic information of each document. For instance, if two documents use different collections of core words to represent the same topic, they may be falsely assigned to different clusters due to the lack of shared core words, although the core words they use are probably synonyms or semantically associated in other forms. The most common way to solve this problem is to enrich document representation with the background knowledge in an ontology. There are two major issues for this approach: (1) the coverage of the ontology is limited, even for WordNet or Mesh, (2) using ontology terms as replacement or additional features may cause information loss, or introduce noise. In this paper, we present a novel text clustering method to address these two issues by enriching document representation with Wikipedia concept and category information. We develop two approaches, exact match and relatedness-match, to map text documents to Wikipedia concepts, and further to Wikipedia categories. Then the text documents are clustered based on a similarity metric which combines document content information, concept information as well as category information. The experimental results using the proposed clustering framework on three datasets (20-newsgroup, TDT2, and LA Times) show that clustering performance improves significantly by enriching document representation with Wikipedia concepts and categories.",
"title": ""
},
{
"docid": "639bbe7b640c514ab405601c7c3cfa01",
"text": "Measuring the semantic similarity between words is an important component in various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. Despite the usefulness of semantic similarity measures in these applications, accurately measuring semantic similarity between two words (or entities) remains a challenging task. We propose an empirical method to estimate semantic similarity using page counts and text snippets retrieved from a web search engine for two words. Specifically, we define various word co-occurrence measures using page counts and integrate those with lexical patterns extracted from text snippets. To identify the numerous semantic relations that exist between two given words, we propose a novel pattern extraction algorithm and a pattern clustering algorithm. The optimal combination of page counts-based co-occurrence measures and lexical pattern clusters is learned using support vector machines. The proposed method outperforms various baselines and previously proposed web-based semantic similarity measures on three benchmark data sets showing a high correlation with human ratings. Moreover, the proposed method significantly improves the accuracy in a community mining task.",
"title": ""
},
{
"docid": "e59d1a3936f880233001eb086032d927",
"text": "In microblogging services such as Twitter, the users may become overwhelmed by the raw data. One solution to this problem is the classification of short text messages. As short texts do not provide sufficient word occurrences, traditional classification methods such as \"Bag-Of-Words\" have limitations. To address this problem, we propose to use a small set of domain-specific features extracted from the author's profile and text. The proposed approach effectively classifies the text to a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages.",
"title": ""
},
{
"docid": "3bee61e95acf274c01f1846233b3c3bb",
"text": "One key difficulty with text classification learning algorithms is that they require many hand-labeled examples to learn accurately. This dissertation demonstrates that supervised learning algorithms that use a small number of labeled examples and many inexpensive unlabeled examples can create high-accuracy text classifiers. By assuming that documents are created by a parametric generative model, Expectation-Maximization (EM) finds local maximum a posteriori models and classifiers from all the data—labeled and unlabeled. These generative models do not capture all the intricacies of text; however on some domains this technique substantially improves classification accuracy, especially when labeled data are sparse. Two problems arise from this basic approach. First, unlabeled data can hurt performance in domains where the generative modeling assumptions are too strongly violated. In this case the assumptions can be made more representative in two ways: by modeling sub-topic class structure, and by modeling super-topic hierarchical class relationships. By doing so, model probability and classification accuracy come into correspondence, allowing unlabeled data to improve classification performance. The second problem is that even with a representative model, the improvements given by unlabeled data do not sufficiently compensate for a paucity of labeled data. Here, limited labeled data provide EM initializations that lead to low-probability models. Performance can be significantly improved by using active learning to select high-quality initializations, and by using alternatives to EM that avoid low-probability local maxima.",
"title": ""
}
] |
[
{
"docid": "9948ebbd2253021e3af53534619c5094",
"text": "This paper presents a novel method to simultaneously estimate the clothed and naked 3D shapes of a person. The method needs only a single photograph of a person wearing clothing. Firstly, we learn a deformable model of human clothed body shapes from a database. Then, given an input image, the deformable model is initialized with a few user-specified 2D joints and contours of the person. And the correspondence between 3D shape and 2D contours is established automatically. Finally, we optimize the parameters of the deformable model in an iterative way, and then obtain the clothed and naked 3D shapes of the person simultaneously. The experimental results on real images demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "629f6ab006700e5bc6b5a001a4d925e5",
"text": "Model predictive control (MPC) is an effective method for controlling robotic systems, particularly autonomous aerial vehicles such as quadcopters. However, application of MPC can be computationally demanding, and typically requires estimating the state of the system, which can be challenging in complex, unstructured environments. Reinforcement learning can in principle forego the need for explicit state estimation and acquire a policy that directly maps sensor readings to actions, but is difficult to apply to unstable systems that are liable to fail catastrophically during training before an effective policy has been found. We propose to combine MPC with reinforcement learning in the framework of guided policy search, where MPC is used to generate data at training time, under full state observations provided by an instrumented training environment. This data is used to train a deep neural network policy, which is allowed to access only the raw observations from the vehicle's onboard sensors. After training, the neural network policy can successfully control the robot without knowledge of the full state, and at a fraction of the computational cost of MPC. We evaluate our method by learning obstacle avoidance policies for a simulated quadrotor, using simulated onboard sensors and no explicit state estimation at test time.",
"title": ""
},
{
"docid": "fc7efee1840ef385537f1686859da87c",
"text": "The self-oscillating converter is a popular circuit for cost-sensitive applications due to its simplicity and low component count. It is widely employed in mobile phone charges and as the stand-by power source in offline power supplies for data-processing equipment. However, this circuit almost was not explored for supplier Power LEDs. This paper presents a self-oscillating buck power electronics driver for supply directly Power LEDs, with no additional circuit. A simplified mathematical model of LED was used to characterize the self-oscillating converter for the power LED driver. In order to improve the performance of the proposed buck converter in this work the control of the light intensity of LEDs was done using a microcontroller to emulate PWM modulation with frequency 200 Hz. At using the converter proposed the effects of the LED manufacturing tolerances and drifts over temperature almost has no influence on the LED average current.",
"title": ""
},
{
"docid": "07ffe189312da8519c4a6260402a0b22",
"text": "Computational social science is an emerging research area at the intersection of computer science, statistics, and the social sciences, in which novel computational methods are used to answer questions about society. The field is inherently collaborative: social scientists provide vital context and insight into pertinent research questions, data sources, and acquisition methods, while statisticians and computer scientists contribute expertise in developing mathematical models and computational tools. New, large-scale sources of demographic, behavioral, and network data from the Internet, sensor networks, and crowdsourcing systems augment more traditional data sources to form the heart of this nascent discipline, along with recent advances in machine learning, statistics, social network analysis, and natural language processing. The related research area of social computing deals with the mechanisms through which people interact with computational systems, examining questions such as how and why people contribute user-generated content and how to design systems that better enable them to do so. Examples of social computing systems include prediction markets, crowdsourcing markets, product review sites, and collaboratively edited wikis, all of which encapsulate some notion of aggregating crowd wisdom, beliefs, or ideas—albeit in different ways. Like computational social science, social computing blends techniques from machine learning and statistics with ideas from the social sciences. For example, the economics literature on incentive design has been especially influential.",
"title": ""
},
{
"docid": "53e6216c2ad088dfcf902cc0566072c6",
"text": "The floating photovoltaic system is a new concept in energy technology to meet the needs of our time. The system integrates existing land based photovoltaic technology with a newly developed floating photovoltaic technology. Because module temperature of floating PV system is lower than that of overland PV system, the floating PV system has 11% better generation efficiency than overland PV system. In the thesis, superiority of floating PV system is verified through comparison analysis of generation amount by 2.4kW, 100kW and 500kW floating PV system installed by K-water and the cause of such superiority was analyzed. Also, effect of wind speed, and waves on floating PV system structure was measured to analyze the effect of the environment on floating PV system generation efficiency.",
"title": ""
},
{
"docid": "f0e22717207ed3bc013d09db3edc337c",
"text": "The bag-of-words model is one of the most popular representation methods for object categorization. The key idea is to quantize each extracted key point into one of visual words, and then represent each image by a histogram of the visual words. For this purpose, a clustering algorithm (e.g., K-means), is generally used for generating the visual words. Although a number of studies have shown encouraging results of the bag-of-words representation for object categorization, theoretical studies on properties of the bag-of-words model is almost untouched, possibly due to the difficulty introduced by using a heuristic clustering process. In this paper, we present a statistical framework which generalizes the bag-of-words representation. In this framework, the visual words are generated by a statistical process rather than using a clustering algorithm, while the empirical performance is competitive to clustering-based method. A theoretical analysis based on statistical consistency is presented for the proposed framework. Moreover, based on the framework we developed two algorithms which do not rely on clustering, while achieving competitive performance in object categorization when compared to clustering-based bag-of-words representations.",
"title": ""
},
{
"docid": "eff407fb0d45ebeea3d5965b7b5df14b",
"text": "In order to develop intelligent systems that attain the trust of their users, it is important to understand how users perceive such systems and develop those perceptions over time. We present an investigation into how users come to understand an intelligent system as they use it in their daily work. During a six-week field study, we interviewed eight office workers regarding the operation of a system that predicted their managers' interruptibility, comparing their mental models to the actual system model. Our results show that by the end of the study, participants were able to discount some of their initial misconceptions about what information the system used for reasoning about interruptibility. However, the overarching structures of their mental models stayed relatively stable over the course of the study. Lastly, we found that participants were able to give lay descriptions attributing simple machine learning concepts to the system despite their lack of technical knowledge. Our findings suggest an appropriate level of feedback for user interfaces of intelligent systems, provide a baseline level of complexity for user understanding, and highlight the challenges of making users aware of sensed inputs for such systems.",
"title": ""
},
{
"docid": "64c2b9f59a77f03e6633e5804356e9fc",
"text": "AbstructWe present a novel method, that we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. This redundant storage is optimal, in the sense that two failed disks cannot be retrieved with less than two redundant disks. A major advantage of EVENODD is that it only requires parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employes optimal redundant storage (Le., two extra disks) is based on ReedSolomon (RS) error-correcting codes. This scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of the one required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error.",
"title": ""
},
{
"docid": "4277894ef2bf88fd3a78063a8b0cc7fe",
"text": "This paper deals with a design method of LCL filter for grid-connected three-phase PWM voltage source inverters (VSI). By analyzing the total harmonic distortion of the current (THDi) in the inverter-side inductor and the ripple attenuation factor of the current (RAF) injected to the grid through the LCL network, the parameter of LCL can be clearly designed. The described LCL filter design method is verified by showing a good agreement between the target current THD and the actual one through simulation and experiment.",
"title": ""
},
{
"docid": "969ba9848fa6d02f74dabbce2f1fe3ab",
"text": "With the rapid growth of social media, massive misinformation is also spreading widely on social media, e.g., Weibo and Twitter, and brings negative effects to human life. Today, automatic misinformation identification has drawn attention from academic and industrial communities. Whereas an event on social media usually consists of multiple microblogs, current methods are mainly constructed based on global statistical features. However, information on social media is full of noise, which should be alleviated. Moreover, most of the microblogs about an event have little contribution to the identification of misinformation, where useful information can be easily overwhelmed by useless information. Thus, it is important to mine significant microblogs for constructing a reliable misinformation identification method. In this article, we propose an attention-based approach for identification of misinformation (AIM). Based on the attention mechanism, AIM can select microblogs with the largest attention values for misinformation identification. The attention mechanism in AIM contains two parts: content attention and dynamic attention. Content attention is the calculated-based textual features of each microblog. Dynamic attention is related to the time interval between the posting time of a microblog and the beginning of the event. To evaluate AIM, we conduct a series of experiments on the Weibo and Twitter datasets, and the experimental results show that the proposed AIM model outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "a91a57326a2d961e24d13b844a3556cf",
"text": "This paper describes an interactive and adaptive streaming architecture that exploits temporal concatenation of H.264/AVC video bit-streams to dynamically adapt to both user commands and network conditions. The architecture has been designed to improve the viewing experience when accessing video content through individual and potentially bandwidth constrained connections. On the one hand, the user commands typically gives the client the opportunity to select interactively a preferred version among the multiple video clips that are made available to render the scene, e.g. using different view angles, or zoomed-in and slowmotion factors. On the other hand, the adaptation to the network bandwidth ensures effective management of the client buffer, which appears to be fundamental to reduce the client-server interaction latency, while maximizing video quality and preventing buffer underflow. In addition to user interaction and network adaptation, the deployment of fully autonomous infrastructures for interactive content distribution also requires the development of automatic versioning methods. Hence, the paper also surveys a number of approaches proposed for this purpose in surveillance and sport event contexts. Both objective metrics and subjective experiments are exploited to assess our system.",
"title": ""
},
{
"docid": "3d20ba5dc32270cb75df7a2d499a70e4",
"text": "The Maximum Margin Planning (MMP) (Ratliff et al., 2006) algorithm solves imitation learning problems by learning linear mappings from features to cost functions in a planning domain. The learned policy is the result of minimum-cost planning using these cost functions. These mappings are chosen so that example policies (or trajectories) given by a teacher appear to be lower cost (with a lossscaled margin) than any other policy for a given planning domain. We provide a novel approach, MMPBOOST , based on the functional gradient descent view of boosting (Mason et al., 1999; Friedman, 1999a) that extends MMP by “boosting” in new features. This approach uses simple binary classification or regression to improve performance of MMP imitation learning, and naturally extends to the class of structured maximum margin prediction problems. (Taskar et al., 2005) Our technique is applied to navigation and planning problems for outdoor mobile robots and robotic legged locomotion.",
"title": ""
},
{
"docid": "1d5e363647bd8018b14abfcc426246bb",
"text": "This paper presents a new approach to improve the performance of finger-vein identification systems presented in the literature. The proposed system simultaneously acquires the finger-vein and low-resolution fingerprint images and combines these two evidences using a novel score-level combination strategy. We examine the previously proposed finger-vein identification approaches and develop a new approach that illustrates it superiority over prior published efforts. The utility of low-resolution fingerprint images acquired from a webcam is examined to ascertain the matching performance from such images. We develop and investigate two new score-level combinations, i.e., holistic and nonlinear fusion, and comparatively evaluate them with more popular score-level fusion approaches to ascertain their effectiveness in the proposed system. The rigorous experimental results presented on the database of 6264 images from 156 subjects illustrate significant improvement in the performance, i.e., both from the authentication and recognition experiments.",
"title": ""
},
{
"docid": "5a7e97c755e29a9a3c82fc3450f9a929",
"text": "Intel Software Guard Extensions (SGX) is a hardware-based Trusted Execution Environment (TEE) that enables secure execution of a program in an isolated environment, called an enclave. SGX hardware protects the running enclave against malicious software, including the operating system, hypervisor, and even low-level firmware. This strong security property allows trustworthy execution of programs in hostile environments, such as a public cloud, without trusting anyone (e.g., a cloud provider) between the enclave and the SGX hardware. However, recent studies have demonstrated that enclave programs are vulnerable to accurate controlled-channel attacks conducted by a malicious OS. Since enclaves rely on the underlying OS, curious and potentially malicious OSs can observe a sequence of accessed addresses by intentionally triggering page faults. In this paper, we propose T-SGX, a complete mitigation solution to the controlled-channel attack in terms of compatibility, performance, and ease of use. T-SGX relies on a commodity component of the Intel processor (since Haswell), called Transactional Synchronization Extensions (TSX), which implements a restricted form of hardware transactional memory. As TSX is implemented as an extension (i.e., snooping the cache protocol), any unusual event, such as an exception or interrupt, that should be handled in its core component, results in an abort of the ongoing transaction. One interesting property is that the TSX abort suppresses the notification of errors to the underlying OS. This means that the OS cannot know whether a page fault has occurred during the transaction. T-SGX, by utilizing this property of TSX, can carefully isolate the effect of attempts to tap running enclaves, thereby completely eradicating the known controlledchannel attack. We have implemented T-SGX as a compiler-level scheme to automatically transform a normal enclave program into a secured enclave program without requiring manual source code modification or annotation. We not only evaluate the security properties of T-SGX, but also demonstrate that it could be applied to all the previously demonstrated attack targets, such as libjpeg, Hunspell, and FreeType. To evaluate the performance of T-SGX, we ported 10 benchmark programs of nbench to the SGX environment. Our evaluation results look promising. T-SGX is † The two lead authors contributed equally to this work. ⋆ The author did part of this work during an intership at Microsoft Research. an order of magnitude faster than the state-of-the-art mitigation schemes. On our benchmarks, T-SGX incurs on average 50% performance overhead and less than 30% storage overhead.",
"title": ""
},
{
"docid": "394fcbcb013951dbc01fdbc713ac6e62",
"text": "We present an approach to text simplification based on synchronous dependency grammars. The higher level of abstraction afforded by dependency representations allows for a linguistically sound treatment of complex constructs requiring reordering and morphological change, such as conversion of passive voice to active. We present a synchronous grammar formalism in which it is easy to write rules by hand and also acquire them automatically from dependency parses of aligned English and Simple English sentences. The grammar formalism is optimised for monolingual translation in that it reuses ordering information from the source sentence where appropriate. We demonstrate the superiority of our approach over a leading contemporary system based on quasi-synchronous tree substitution grammars, both in terms of expressivity and performance.",
"title": ""
},
{
"docid": "b7046fb7619949b9a03823450c19a8d5",
"text": "We introduce a model that learns active learning algorithms via metalearning. For a distribution of related tasks, our model jointly learns: a data representation, an item selection heuristic, and a prediction function. Our model uses the item selection heuristic to construct a labeled support set for training the prediction function. Using the Omniglot and MovieLens datasets, we test our model in synthetic and practical settings.",
"title": ""
},
{
"docid": "93df255e39f57dd2167191da4b90540f",
"text": "OBJECTIVE\nParkinsonian patients have abnormal oscillatory activity within the basal ganglia-thalamocortical circuitry. Particularly, excessive beta band oscillations are thought to be associated with akinesia. We studied whether cortical spontaneous activity is modified by deep brain stimulation (DBS) in advanced Parkinson's disease and if the modifications are related to the clinical symptoms.\n\n\nMETHODS\nWe studied the effects of bilateral electrical stimulation of subthalamic nucleus (STN) on cortical spontaneous activity by magnetoencephalography (MEG) in 11 Parkinsonian patients. The artifacts produced by DBS were suppressed by tSSS algorithm.\n\n\nRESULTS\nDuring DBS, UPDRS (Unified Parkinson's Disease Rating Scale) rigidity scores correlated with 6-10 Hz and 12-20 Hz somatomotor source strengths when eyes were open. When DBS was off UPDRS action tremor scores correlated with pericentral 6-10 Hz and 21-30 Hz and occipital alpha source strengths when eyes open. Occipital alpha strength decreased during DBS when eyes closed. The peak frequency of occipital alpha rhythm correlated negatively with total UPDRS motor scores and with rigidity subscores, when eyes closed.\n\n\nCONCLUSION\nSTN DBS modulates brain oscillations both in alpha and beta bands and these oscillations reflect the clinical condition during DBS.\n\n\nSIGNIFICANCE\nMEG combined with an appropriate artifact rejection method enables studies of DBS effects in Parkinson's disease and presumably also in the other emerging DBS indications.",
"title": ""
},
{
"docid": "02e0514dc8b7bfa65b55a0e8969dd0ad",
"text": "A detailed comparison was made of two methods for assessing the features of eating disorders. An investigator-based interview was compared with a self-report questionnaire based directly on that interview. A number of important discrepancies emerged. Although the two measures performed similarly with respect to the assessment of unambiguous behavioral features such as self-induced vomiting and dieting, the self-report questionnaire generated higher scores than the interview when assessing more complex features such as binge eating and concerns about shape. Both methods underestimated body weight.",
"title": ""
},
{
"docid": "911ea52fa57524e002154e2fe276ac44",
"text": "Many current natural language processing applications for social media rely on representation learning and utilize pre-trained word embeddings. There currently exist several publicly-available, pre-trained sets of word embeddings, but they contain few or no emoji representations even as emoji usage in social media has increased. In this paper we release emoji2vec, pre-trained embeddings for all Unicode emoji which are learned from their description in the Unicode emoji standard.1 The resulting emoji embeddings can be readily used in downstream social natural language processing applications alongside word2vec. We demonstrate, for the downstream task of sentiment analysis, that emoji embeddings learned from short descriptions outperforms a skip-gram model trained on a large collection of tweets, while avoiding the need for contexts in which emoji need to appear frequently in order to estimate a representation.",
"title": ""
},
{
"docid": "e31ea6b8c4a5df049782b463abc602ea",
"text": "Nature plays a very important role to solve problems in a very effective and well-organized way. Few researchers are trying to create computational methods that can assist human to solve difficult problems. Nature inspired techniques like swarm intelligence, bio-inspired, physics/chemistry and many more have helped in solving difficult problems and also provide most favourable solution. Nature inspired techniques are wellmatched for soft computing application because parallel, dynamic and self organising behaviour. These algorithms motivated from the working group of social agents like ants, bees and insect. This paper is a complete survey of nature inspired techniques.",
"title": ""
}
] |
scidocsrr
|
98f15dcee44b3b0014a0dc70c2ba6fca
|
Survey on distance metric learning and dimensionality reduction in data mining
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "effa64c878add2a55a804415cb7c8169",
"text": "Dimensionality reduction is an important issue in many machine learning and pattern recognition applications, and the trace ratio (TR) problem is an optimization problem involved in many dimensionality reduction algorithms. Conventionally, the solution is approximated via generalized eigenvalue decomposition due to the difficulty of the original problem. However, prior works have indicated that it is more reasonable to solve it directly than via the conventional way. In this brief, we propose a theoretical overview of the global optimum solution to the TR problem via the equivalent trace difference problem. Eigenvalue perturbation theory is introduced to derive an efficient algorithm based on the Newton-Raphson method. Theoretical issues on the convergence and efficiency of our algorithm compared with prior literature are proposed, and are further supported by extensive empirical results.",
"title": ""
},
{
"docid": "7655df3f32e6cf7a5545ae2231f71e7c",
"text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.",
"title": ""
}
] |
[
{
"docid": "f136e875f021ea3ea67a87c6d0b1e869",
"text": "Platelet-rich plasma (PRP) has been utilized for many years as a regenerative agent capable of inducing vascularization of various tissues using blood-derived growth factors. Despite this, drawbacks mostly related to the additional use of anti-coagulants found in PRP have been shown to inhibit the wound healing process. For these reasons, a novel platelet concentrate has recently been developed with no additives by utilizing lower centrifugation speeds. The purpose of this study was therefore to investigate osteoblast behavior of this novel therapy (injectable-platelet-rich fibrin; i-PRF, 100% natural with no additives) when compared to traditional PRP. Human primary osteoblasts were cultured with either i-PRF or PRP and compared to control tissue culture plastic. A live/dead assay, migration assay as well as a cell adhesion/proliferation assay were investigated. Furthermore, osteoblast differentiation was assessed by alkaline phosphatase (ALP), alizarin red and osteocalcin staining, as well as real-time PCR for genes encoding Runx2, ALP, collagen1 and osteocalcin. The results showed that all cells had high survival rates throughout the entire study period irrespective of culture-conditions. While PRP induced a significant 2-fold increase in osteoblast migration, i-PRF demonstrated a 3-fold increase in migration when compared to control tissue-culture plastic and PRP. While no differences were observed for cell attachment, i-PRF induced a significantly higher proliferation rate at three and five days when compared to PRP. Furthermore, i-PRF induced significantly greater ALP staining at 7 days and alizarin red staining at 14 days. A significant increase in mRNA levels of ALP, Runx2 and osteocalcin, as well as immunofluorescent staining of osteocalcin was also observed in the i-PRF group when compared to PRP. In conclusion, the results from the present study favored the use of the naturally-formulated i-PRF when compared to traditional PRP with anti-coagulants. Further investigation into the direct role of fibrin and leukocytes contained within i-PRF are therefore warranted to better elucidate their positive role in i-PRF on tissue wound healing.",
"title": ""
},
{
"docid": "625c5c89b9f0001a3eed1ec6fb498c23",
"text": "About a 100 years ago, the Drosophila white mutant marked the birth of Drosophila genetics. The white gene turned out to encode the first well studied ABC transporter in arthropods. The ABC gene family is now recognized as one of the largest transporter families in all kingdoms of life. The majority of ABC proteins function as primary-active transporters that bind and hydrolyze ATP while transporting a large diversity of substrates across lipid membranes. Although extremely well studied in vertebrates for their role in drug resistance, less is known about the role of this family in the transport of endogenous and exogenous substances in arthropods. The ABC families of five insect species, a crustacean and a chelicerate have been annotated in some detail. We conducted a thorough phylogenetic analysis of the seven arthropod and human ABC protein subfamilies, to infer orthologous relationships that might suggest conserved function. Most orthologous relationships were found in the ABCB half transporter, ABCD, ABCE and ABCF subfamilies, but specific expansions within species and lineages are frequently observed and discussed. We next surveyed the role of ABC transporters in the transport of xenobiotics/plant allelochemicals and their involvement in insecticide resistance. The involvement of ABC transporters in xenobiotic resistance in arthropods is historically not well documented, but an increasing number of studies using unbiased differential gene expression analysis now points to their importance. We give an overview of methods that can be used to link ABC transporters to resistance. ABC proteins have also recently been implicated in the mode of action and resistance to Bt toxins in Lepidoptera. Given the enormous interest in Bt toxicology in transgenic crops, such findings will provide an impetus to further reveal the role of ABC transporters in arthropods. 2014 The Authors. Published by Elsevier Ltd. Open access under CC BY-NC-ND license.",
"title": ""
},
{
"docid": "552ad2b05d0e7812bb5e17fb22c3de28",
"text": "Behavior-based agents are becoming increasingly used across a variety of platforms. The common approach to building such agents involves implementing the behavior synchronization and management algorithms directly in the agent’s programming environment. This process makes it hard, if not impossible, to share common components of a behavior architecture across different agent implementations. This lack of reuse also makes it cumbersome to experiment with different behavior architectures as it forces users to manipulate native code directly, e.g. C++ or Java. In this paper, we provide a high-level behavior-centric programming language and an automated code generation system which together overcome these issues and facilitate the process of implementing and experimenting with different behavior architectures. The language is specifically designed to allow clear and precise descriptions of a behavior hierarchy, and can be automatically translated by our generator into C++ code. Once compiled, this C++ code yields an executable that directs the execution of behaviors in the agent’s sense-plan-act cycle. We have tested this process with different platforms, including both software and robot agents, with various behavior architectures. We experienced the advantages of defining an agent by directly reasoning at the behavior architecture level followed by the automatic native code generation.",
"title": ""
},
{
"docid": "3535e70b1c264d99eff5797413650283",
"text": "MIMO is one of the techniques used in LTE Release 8 to achieve very high data rates. A field trial was performed in a pre-commercial LTE network. The objective is to investigate how well MIMO works with realistically designed handhelds in band 13 (746-756 MHz in downlink). In total, three different handheld designs were tested using antenna mockups. In addition to the mockups, a reference antenna design with less stringent restrictions on physical size and excellent properties for MIMO was used. The trial comprised test drives in areas with different characteristics and with different network load levels. The effects of hands holding the devices and the effect of using the device inside a test vehicle were also investigated. In general, it is very clear from the trial that MIMO works very well and gives a substantial performance improvement at the tested carrier frequency if the antenna design of the hand-held is well made with respect to MIMO. In fact, the best of the handhelds performed similar to the reference antenna.",
"title": ""
},
{
"docid": "8aa305f217314d60ed6c9f66d20a7abf",
"text": "The circadian timing system drives daily rhythmic changes in drug metabolism and controls rhythmic events in cell cycle, DNA repair, apoptosis, and angiogenesis in both normal tissue and cancer. Rodent and human studies have shown that the toxicity and anticancer activity of common cancer drugs can be significantly modified by the time of administration. Altered sleep/activity rhythms are common in cancer patients and can be disrupted even more when anticancer drugs are administered at their most toxic time. Disruption of the sleep/activity rhythm accelerates cancer growth. The complex circadian time-dependent connection between host, cancer and therapy is further impacted by other factors including gender, inter-individual differences and clock gene polymorphism and/or down regulation. It is important to take circadian timing into account at all stages of new drug development in an effort to optimize the therapeutic index for new cancer drugs. Better measures of the individual differences in circadian biology of host and cancer are required to further optimize the potential benefit of chronotherapy for each individual patient.",
"title": ""
},
{
"docid": "9c89c4c4ae75f9b003fca6696163619a",
"text": "We study a class of stochastic optimization models of expected utility in markets with stochastically changing investment opportunities. The prices of the primitive assets are modelled as diffusion processes whose coefficients evolve according to correlated diffusion factors. Under certain assumptions on the individual preferences, we are able to produce reduced form solutions. Employing a power transformation, we express the value function in terms of the solution of a linear parabolic equation, with the power exponent depending only on the coefficients of correlation and risk aversion. This reduction facilitates considerably the study of the value function and the characterization of the optimal hedging demand. The new results demonstrate an interesting connection with valuation techniques using stochastic differential utilities and also, with distorted measures in a dynamic setting.",
"title": ""
},
{
"docid": "d3d57d67d4384f916f9e9e48f3fcdcdb",
"text": "Web-based social networks have become popular as a medium for disseminating information and connecting like-minded people. The public accessibility of such networks with the ability to share opinions, thoughts, information, and experience offers great promise to enterprises and governments. In addition to individuals using such networks to connect to their friends and families, governments and enterprises have started exploiting these platforms for delivering their services to citizens and customers. However, the success of such attempts relies on the level of trust that members have with each other as well as with the service provider. Therefore, trust becomes an essential and important element of a successful social network. In this article, we present the first comprehensive review of social and computer science literature on trust in social networks. We first review the existing definitions of trust and define social trust in the context of social networks. We then discuss recent works addressing three aspects of social trust: trust information collection, trust evaluation, and trust dissemination. Finally, we compare and contrast the literature and identify areas for further research in social trust.",
"title": ""
},
{
"docid": "405a1e8badfb85dcd1d5cc9b4a0026d2",
"text": "It is of great practical importance to improve yield and quality of vegetables in soilless cultures. This study investigated the effects of iron-nutrition management on yield and quality of hydroponic-cultivated spinach (Spinacia oleracea L.). The results showed that mild Fe-deficient treatment (1 μM FeEDTA) yielded a greater biomass of edible parts than Fe-omitted treatment (0 μM FeEDTA) or Fe-sufficient treatments (10 and 50 μM FeEDTA). Conversely, mild Fe-deficient treatment had the lowest nitrate concentration in the edible parts out of all the Fe treatments. Interestingly, all the concentrations of soluble sugar, soluble protein and ascorbate in mild Fe-deficient treatments were higher than Fe-sufficient treatments. In addition, both phenolic concentration and DPPH scavenging activity in mild Fe-deficient treatments were comparable with those in Fe-sufficient treatments, but were higher than those in Fe-omitted treatments. Therefore, we concluded that using a mild Fe-deficient nutrition solution to cultivate spinach not only would increase yield, but also would improve quality.",
"title": ""
},
{
"docid": "781ebbf85a510cfd46f0c824aa4aba7e",
"text": "Human activity recognition (HAR) is an important research area in the fields of human perception and computer vision due to its wide range of applications. These applications include: intelligent video surveillance, ambient assisted living, human computer interaction, human-robot interaction, entertainment, and intelligent driving. Recently, with the emergence and successful deployment of deep learning techniques for image classification, researchers have migrated from traditional handcrafting to deep learning techniques for HAR. However, handcrafted representation-based approaches are still widely used due to some bottlenecks such as computational complexity of deep learning techniques for activity recognition. However, approaches based on handcrafted representation are not able to handle complex scenarios due to their limitations and incapability; therefore, resorting to deep learning-based techniques is a natural option. This review paper presents a comprehensive survey of both handcrafted and learning-based action representations, offering comparison, analysis, and discussions on these approaches. In addition to this, the well-known public datasets available for experimentations and important applications of HAR are also presented to provide further insight into the field. This is the first review paper of its kind which presents all these aspects of HAR in a single review article with comprehensive coverage of each part. Finally, the paper is concluded with important discussions and research directions in the domain of HAR.",
"title": ""
},
{
"docid": "0c805b994e89c878a62f2e1066b0a8e7",
"text": "3D spatial data modeling is one of the key research problems in 3D GIS. More and more applications depend on these 3D spatial data. Mostly, these data are stored in Geo-DBMSs. However, recent Geo-DBMSs do not support 3D primitives modeling, it only able to describe a single-attribute of the third-dimension, i.e. modeling 2.5D datasets that used 2D primitives (plus a single z-coordinate) such as polygons in 3D space. This research focuses on 3D topological model based on space partition for 3D GIS, for instance, 3D polygons or tetrahedron form a solid3D object. Firstly, this report discusses formal definitions of 3D spatial objects, and then all the properties of each object primitives will be elaborated in detailed. The author also discusses methods for constructing the topological properties to support object semantics is introduced. The formal framework to describe the spatial model, database using Oracle Spatial is also given in this report. All related topological structures that forms the object features are discussed in detail. All related features are tested using real 3D spatial dataset of 3D building. Finally, the report concludes the experiment via visualization of using AutoDesk Map 3D.",
"title": ""
},
{
"docid": "1b030e734e3ddfb5e612b1adc651b812",
"text": "Clustering1is an essential task in many areas such as machine learning, data mining and computer vision among others. Cluster validation aims to assess the quality of partitions obtained by clustering algorithms. Several indexes have been developed for cluster validation purpose. They can be external or internal depending on the availability of ground truth clustering. This paper deals with the issue of cluster validation of large data set. Indeed, in the era of big data this task becomes even more difficult to handle and requires parallel and distributed approaches. In this work, we are interested in external validation indexes. More specifically, this paper proposes a model for purity based cluster validation in parallel and distributed manner using Map-Reduce paradigm in order to be able to scale with increasing dataset sizes.\n The experimental results show that our proposed model is valid and achieves properly cluster validation of large datasets.",
"title": ""
},
{
"docid": "d71040311b8753299377b02023ba5b4c",
"text": "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Ex-ploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.",
"title": ""
},
{
"docid": "dc2ea774fb11bc09e80b9de3acd7d5a6",
"text": "The Hough transform is a well-known straight line detection algorithm and it has been widely used for many lane detection algorithms. However, its real-time operation is not guaranteed due to its high computational complexity. In this paper, we designed a Hough transform hardware accelerator on FPGA to process it in real time. Its FPGA logic area usage was reduced by limiting the angles of the lines to (-20, 20) degrees which are enough for lane detection applications, and its arithmetic computations were performed in parallel to speed up the processing time. As a result of FPGA synthesis using Xilinx Vertex-5 XC5VLX330 device, it occupies 4,521 slices and 25.6Kbyte block memory giving performance of 10,000fps in VGA images(5000 edge points). The proposed hardware on FPGA (0.1ms) is 450 times faster than the software implementation on ARM Cortex-A9 1.4GHz (45ms). Our Hough transform hardware was verified by applying it to the newly developed LDWS (lane departure warning system).",
"title": ""
},
{
"docid": "dd726458660c3dfe05bd775df562e188",
"text": "Maternally deprived rats were treated with tianeptine (15 mg/kg) once a day for 14 days during their adult phase. Their behavior was then assessed using the forced swimming and open field tests. The BDNF, NGF and energy metabolism were assessed in the rat brain. Deprived rats increased the immobility time, but tianeptine reversed this effect and increased the swimming time; the BDNF levels were decreased in the amygdala of the deprived rats treated with saline and the BDNF levels were decreased in the nucleus accumbens within all groups; the NGF was found to have decreased in the hippocampus, amygdala and nucleus accumbens of the deprived rats; citrate synthase was increased in the hippocampus of non-deprived rats treated with tianeptine and the creatine kinase was decreased in the hippocampus and amygdala of the deprived rats; the mitochondrial complex I and II–III were inhibited, and tianeptine increased the mitochondrial complex II and IV in the hippocampus of the non-deprived rats; the succinate dehydrogenase was increased in the hippocampus of non-deprived rats treated with tianeptine. So, tianeptine showed antidepressant effects conducted on maternally deprived rats, and this can be attributed to its action on the neurochemical pathways related to depression.",
"title": ""
},
{
"docid": "79593cc56da377d834f33528b833641f",
"text": "Machine learning offers a fantastically powerful toolkit f or building complex systems quickly. This paper argues that it is dangerous to think of these quick wins as coming for free. Using the framework of technical debt , we note that it is remarkably easy to incur massive ongoing maintenance costs at the system level when applying machine learning. The goal of this paper is hig hlight several machine learning specific risk factors and design patterns to b e avoided or refactored where possible. These include boundary erosion, entanglem ent, hidden feedback loops, undeclared consumers, data dependencies, changes i n the external world, and a variety of system-level anti-patterns. 1 Machine Learning and Complex Systems Real world software engineers are often faced with the chall enge of moving quickly to ship new products or services, which can lead to a dilemma between spe ed of execution and quality of engineering. The concept of technical debtwas first introduced by Ward Cunningham in 1992 as a way to help quantify the cost of such decisions. Like incurri ng fiscal debt, there are often sound strategic reasons to take on technical debt. Not all debt is n ecessarily bad, but technical debt does tend to compound. Deferring the work to pay it off results in i ncreasing costs, system brittleness, and reduced rates of innovation. Traditional methods of paying off technical debt include re factoring, increasing coverage of unit tests, deleting dead code, reducing dependencies, tighten ng APIs, and improving documentation [4]. The goal of these activities is not to add new functionality, but to make it easier to add future improvements, be cheaper to maintain, and reduce the likeli hood of bugs. One of the basic arguments in this paper is that machine learn ing packages have all the basic code complexity issues as normal code, but also have a larger syst em-level complexity that can create hidden debt. Thus, refactoring these libraries, adding bet ter unit tests, and associated activity is time well spent but does not necessarily address debt at a systems level. In this paper, we focus on the system-level interaction betw e n machine learning code and larger systems as an area where hidden technical debt may rapidly accum ulate. At a system-level, a machine learning model may subtly erode abstraction boundaries. It may be tempting to re-use input signals in ways that create unintended tight coupling of otherw ise disjoint systems. Machine learning packages may often be treated as black boxes, resulting in la rge masses of “glue code” or calibration layers that can lock in assumptions. Changes in the exte rnal world may make models or input signals change behavior in unintended ways, ratcheting up m aintenance cost and the burden of any debt. Even monitoring that the system as a whole is operating s intended may be difficult without careful design.",
"title": ""
},
{
"docid": "6cad42e549f449c7156b0a07e2e02726",
"text": "Fog computing extends the cloud computing paradigm by placing resources close to the edges of the network to deal with the upcoming growth of connected devices. Smart city applications, such as health monitoring and predictive maintenance, will introduce a new set of stringent requirements, such as low latency, since resources can be requested on-demand simultaneously by multiple devices at different locations. It is then necessary to adapt existing network technologies to future needs and design new architectural concepts to help meet these strict requirements. This article proposes a fog computing framework enabling autonomous management and orchestration functionalities in 5G-enabled smart cities. Our approach follows the guidelines of the European Telecommunications Standards Institute (ETSI) NFV MANO architecture extending it with additional software components. The contribution of our work is its fully-integrated fog node management system alongside the foreseen application layer Peer-to-Peer (P2P) fog protocol based on the Open Shortest Path First (OSPF) routing protocol for the exchange of application service provisioning information between fog nodes. Evaluations of an anomaly detection use case based on an air monitoring application are presented. Our results show that the proposed framework achieves a substantial reduction in network bandwidth usage and in latency when compared to centralized cloud solutions.",
"title": ""
},
{
"docid": "d59d1ac7b3833ee1e60f7179a4a9af99",
"text": "s Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges. GJCST Classification : C.1.4, C.2.1 Research Issues in Cloud Computing Strictly as per the compliance and regulations of: Research Issues in Cloud Computing V. Krishna Reddy , B. Thirumala Rao , Dr. L.S.S. Reddy , P. Sai Kiran ABSTRACT : Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges. Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges.",
"title": ""
},
{
"docid": "fedcb2bd51b9fd147681ae23e03c7336",
"text": "Epidemiological studies have revealed the important role that foodstuffs of vegetable origin have to play in the prevention of numerous illnesses. The natural antioxidants present in such foodstuffs, among which the fl avonoids are widely present, may be responsible for such an activity. Flavonoids are compounds that are low in molecular weight and widely distributed throughout the vegetable kingdom. They may be of great utility in states of accute or chronic diarrhoea through the inhibition of intestinal secretion and motility, and may also be benefi cial in the reduction of chronic infl ammatory damage in the intestine, by affording protection against oxidative stress and by preserving mucosal function. For this reason, the use of these agents is recommended in the treatment of infl ammatory bowel disease, in which various factors are involved in extreme immunological reactions, which lead to chronic intestinal infl ammation.",
"title": ""
},
{
"docid": "a89c0a16d161ef41603583567f85a118",
"text": "360° Video services with resolutions of UHD and beyond for Virtual Reality head mounted displays are a challenging task due to limits of video decoders in constrained end devices. Adaptivity to the current user viewport is a promising approach but incurs significant encoding overhead when encoding per user or set of viewports. A more efficient way to achieve viewport adaptive streaming is to facilitate motion-constrained HEVC tiles. Original content resolution within the user viewport is preserved while content currently not presented to the user is delivered in lower resolution. A lightweight aggregation of varying resolution tiles into a single HEVC bitstream can be carried out on-the-fly and allows usage of a single decoder instance on the end device.",
"title": ""
},
{
"docid": "241f5a88f53c929cc11ce0edce191704",
"text": "Enabled by mobile and wearable technology, personal health data delivers immense and increasing value for healthcare, benefiting both care providers and medical research. The secure and convenient sharing of personal health data is crucial to the improvement of the interaction and collaboration of the healthcare industry. Faced with the potential privacy issues and vulnerabilities existing in current personal health data storage and sharing systems, as well as the concept of self-sovereign data ownership, we propose an innovative user-centric health data sharing solution by utilizing a decentralized and permissioned blockchain to protect privacy using channel formation scheme and enhance the identity management using the membership service supported by the blockchain. A mobile application is deployed to collect health data from personal wearable devices, manual input, and medical devices, and synchronize data to the cloud for data sharing with healthcare providers and health insurance companies. To preserve the integrity of health data, within each record, a proof of integrity and validation is permanently retrievable from cloud database and is anchored to the blockchain network. Moreover, for scalable and performance considerations, we adopt a tree-based data processing and batching method to handle large data sets of personal health data collected and uploaded by the mobile platform.",
"title": ""
}
] |
scidocsrr
|
22c5fd7ddba330aa4189160f32fafa49
|
Text Embeddings for Retrieval From a Large Knowledge Base
|
[
{
"docid": "b5a5f8fc7015e8a9632376b81fdfcaa6",
"text": "Despite the fast developmental pace of new sentence embedding methods, it is still challenging to find comprehensive evaluations of these different techniques. In the past years, we saw significant improvements in the field of sentence embeddings and especially towards the development of universal sentence encoders that could provide inductive transfer to a wide variety of downstream tasks. In this work, we perform a comprehensive evaluation of recent methods using a wide variety of downstream and linguistic feature probing tasks. We show that a simple approach using bag-of-words with a recently introduced language model for deep contextdependent word embeddings proved to yield better results in many tasks when compared to sentence encoders trained on entailment datasets. We also show, however, that we are still far away from a universal encoder that can perform consistently across several downstream tasks.",
"title": ""
}
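The passage above reports that a simple bag-of-words average of word embeddings can rival dedicated sentence encoders. Purely as an illustration of that baseline idea (not the paper's evaluation pipeline), here is a minimal Python sketch; the toy vocabulary and random vectors stand in for a real pretrained embedding model and are assumptions of the example.

```python
import numpy as np

# Toy embedding table: in practice these vectors would come from a pretrained
# word-embedding or contextual language model (an assumption of this sketch).
rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "sat": 2, "dog": 3, "ran": 4}
embeddings = rng.normal(size=(len(vocab), 8))

def sentence_embedding(sentence: str) -> np.ndarray:
    """Average the word vectors of in-vocabulary tokens (bag-of-words)."""
    vecs = [embeddings[vocab[w]] for w in sentence.lower().split() if w in vocab]
    if not vecs:
        return np.zeros(embeddings.shape[1])
    return np.mean(vecs, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two sentence vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

s1 = sentence_embedding("the cat sat")
s2 = sentence_embedding("the dog ran")
print("similarity:", cosine(s1, s2))
```

The same averaging step is what a stronger contextual model would replace; the rest of the comparison pipeline stays unchanged.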
] |
[
{
"docid": "ad48ca7415808c4337c0b6eb593005d6",
"text": "Neuroscience is experiencing a data revolution in which many hundreds or thousands of neurons are recorded simultaneously. Currently, there is little consensus on how such data should be analyzed. Here we introduce LFADS (Latent Factor Analysis via Dynamical Systems), a method to infer latent dynamics from simultaneously recorded, single-trial, high-dimensional neural spiking data. LFADS is a sequential model based on a variational auto-encoder. By making a dynamical systems hypothesis regarding the generation of the observed data, LFADS reduces observed spiking to a set of low-dimensional temporal factors, per-trial initial conditions, and inferred inputs. We compare LFADS to existing methods on synthetic data and show that it significantly out-performs them in inferring neural firing rates and latent dynamics.",
"title": ""
},
{
"docid": "0d48e7715f3e0d74407cc5a21f2c322a",
"text": "Every teacher of linear algebra should be familiar with the matrix singular value decomposition (or SVD). It has interesting and attractive algebraic properties, and conveys important geometrical and theoretical insights about linear transformations. The close connection between the SVD and the well known theory of diagonalization for symmetric matrices makes the topic immediately accessible to linear algebra teachers, and indeed, a natural extension of what these teachers already know. At the same time, the SVD has fundamental importance in several different applications of linear algebra. Strang was aware of these facts when he introduced the SVD in his now classical text [22, page 142], observing",
"title": ""
},
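Since the passage above centers on the singular value decomposition and its applications, a short NumPy sketch may make it concrete: it computes the thin SVD of a small matrix and forms the best rank-1 approximation. This is a generic textbook illustration, not code from the cited text.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# Thin SVD: A = U @ diag(s) @ Vt, with singular values sorted in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print("singular values:", s)

# Best rank-1 approximation (Eckart-Young): keep only the largest singular value.
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])
print("rank-1 approximation error (Frobenius):", np.linalg.norm(A - A1))
```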
{
"docid": "4035273cce65e3fe73e0a000c1726c0d",
"text": "In recent years, organizations have invested heavily in e-procurement technology solutions. However, an estimation of the value of the technology-enabled procurement process is often lacking. Our paper presents a rigorous methodological approach to the analysis of e-procurement benefits. Business process simulations are used to analyze the benefits of both technological and organizational changes related to e-procurement. The approach enables an estimation of both the average and variability of procurement costs and benefits, workload, and lead times. In addition, the approach enables optimization of a procurement strategy (e.g., approval levels). Finally, an innovative approach to estimation of value at risk is shown.",
"title": ""
},
{
"docid": "06caed57da5784de254b5efcf1724003",
"text": "The validity of any traffic simulation model depends on its ability to generate representative driver acceleration profiles. This paper studies the effectiveness of recurrent neural networks in predicting the acceleration distributions for car following on highways. The long short-term memory recurrent networks are trained and used to propagate the simulated vehicle trajectories over 10-s horizons. On the basis of several performance metrics, the recurrent networks are shown to generally match or outperform baseline methods in replicating driver behavior, including smoothness and oscillatory characteristics present in real trajectories. This paper reveals that the strong performance is due to the ability of the recurrent network to identify recent trends in the ego-vehicle's state, and recurrent networks are shown to perform as, well as feedforward networks with longer histories as inputs.",
"title": ""
},
{
"docid": "b8466da90f2e75df2cc8453564ddb3e8",
"text": "Deep neural networks have recently shown impressive classification performance on a diverse set of visual tasks. When deployed in real-world (noise-prone) environments, it is equally important that these classifiers satisfy robustness guarantees: small perturbations applied to the samples should not yield significant losses to the performance of the predictor. The goal of this paper is to discuss the robustness of deep networks to a diverse set of perturbations that may affect the samples in practice, including adversarial perturbations, random noise, and geometric transformations. Our paper further discusses the recent works that build on the robustness analysis to provide geometric insights on the classifier’s decision surface, which help in developing a better understanding of deep nets. The overview finally presents recent solutions that attempt to increase the robustness of deep networks. We hope that this review paper will contribute shedding light on the open research challenges in the robustness of deep networks, and will stir interest in the analysis of their fundamental properties.",
"title": ""
},
{
"docid": "47c5f3a7230ac19b8889ced2d8f4318a",
"text": "This paper deals with the setting parameter optimization procedure for a multi-phase induction heating system considering transverse flux heating. This system is able to achieve uniform static heating of different thin/size metal pieces without movable inductor parts, yokes or magnetic screens. The goal is reached by the predetermination of the induced power density distribution using an optimization procedure that leads to the required inductor supplying currents. The purpose of the paper is to describe the optimization program with the different solution obtained and to show that some compromise must be done between the accuracy of the temperature profile and the energy consumption.",
"title": ""
},
{
"docid": "0358eea62c126243134ed1cd2ac97121",
"text": "In the absence of vision, grasping an object often relies on tactile feedback from the ngertips. As the nger pushes the object, the ngertip can feel the contact point move. If the object is known in advance, from this motion the nger may infer the location of the contact point on the object and thereby the object pose. This paper primarily investigates the problem of determining the pose (orientation and position) and motion (velocity and angular velocity) of a planar object with known geometry from such contact motion generated by pushing. A dynamic analysis of pushing yields a nonlinear system that relates through contact the object pose and motion to the nger motion. The contact motion on the ngertip thus encodes certain information about the object pose. Nonlinear observability theory is employed to show that such information is su cient for the nger to \\observe\" not only the pose but also the motion of the object. Therefore a sensing strategy can be realized as an observer of the nonlinear dynamical system. Two observers are subsequently introduced. The rst observer, based on the result of [15], has its \\gain\" determined by the solution of a Lyapunov-like equation; it can be activated at any time instant during a push. The second observer, based on Newton's method, solves for the initial (motionless) object pose from three intermediate contact points during a push. Under the Coulomb friction model, the paper copes with support friction in the plane and/or contact friction between the nger and the object. Extensive simulations have been done to demonstrate the feasibility of the two observers. Preliminary experiments (with an Adept robot) have also been conducted. A contact sensor has been implemented using strain gauges. Accepted by the International Journal of Robotics Research.",
"title": ""
},
{
"docid": "33bd561e2d8e1799d5d5156cbfe3f2e5",
"text": "OBJECTIVE\nTo assess the effects of Balint groups on empathy measured by the Consultation And Relational Empathy Measure (CARE) scale rated by standardized patients during objective structured clinical examination and self-rated Jefferson's School Empathy Scale - Medical Student (JSPE-MS©) among fourth-year medical students.\n\n\nMETHODS\nA two-site randomized controlled trial were planned, from October 2015 to December 2015 at Paris Diderot and Paris Descartes University, France. Eligible students were fourth-year students who gave their consent to participate. Participants were allocated in equal proportion to the intervention group or to the control group. Participants in the intervention group received a training of 7 sessions of 1.5-hour Balint groups, over 3months. The main outcomes were CARE and the JSPE-MS© scores at follow-up.\n\n\nRESULTS\nData from 299 out of 352 randomized participants were analyzed: 155 in the intervention group and 144 in the control group, with no differences in baseline measures. There was no significant difference in CARE score at follow-up between the two groups (P=0.49). The intervention group displayed significantly higher JSPE-MS© score at follow-up than the control group [Mean (SD): 111.9 (10.6) versus 107.7 (12.7), P=0.002]. The JSPE-MS© score increased from baseline to follow-up in the intervention group, whereas it decreased in the control group [1.5 (9.1) versus -1.8 (10.8), P=0.006].\n\n\nCONCLUSIONS\nBalint groups may contribute to promote clinical empathy among medical students.\n\n\nTRIAL REGISTRATION\nNCT02681380.",
"title": ""
},
{
"docid": "b5515ce58a5f40fb5129560c9bdc3b10",
"text": "Lipoid pneumonia in children follows mineral oil aspiration and may result in acute respiratory failure. Majority of the patients recover without long-term morbidity, though a few may be left with residual damage to the lungs. We report a case of a two-and-a-half-year-old child with persistent lipoid pneumonia following accidental inhalation of machine oil, who was successfully treated with steroids.",
"title": ""
},
{
"docid": "f7f1deeda9730056876db39b4fe51649",
"text": "Fracture in bone occurs when an external force exercised upon the bone is more than what the bone can tolerate or bear. As, its consequence structure and muscular power of the bone is disturbed and bone becomes frail, which causes tormenting pain on the bone and ends up in the loss of functioning of bone. Accurate bone structure and fracture detection is achieved using various algorithms which removes noise, enhances image details and highlights the fracture region. Automatic detection of fractures from x-ray images is considered as an important process in medical image analysis by both orthopaedic and radiologic aspect. Manual examination of x-rays has multitude drawbacks. The process is time consuming and subjective. In this paper we discuss several digital image processing techniques applied in fracture detection of bone. This led us to study techniques that have been applied to images obtained from different modalities like x-ray, CT, MRI and ultrasound. Keywords— Fracture detection, Medical Imaging, Morphology, Tibia, X-ray image",
"title": ""
},
{
"docid": "d107d7bdfa1cd24985ec49b54b267ba7",
"text": "The classification and the count of white blood cells in microscopy images allows the in vivo assessment of a wide range of important hematic pathologies (i.e., from presence of infections to leukemia). Nowadays, the morphological cell classification is typically made by experienced operators. Such a procedure presents undesirable drawbacks: slowness and it presents a not standardized accuracy since it depends on the operator's capabilities and tiredness. Only few attempts of partial/full automated systems based on image-processing systems are present in literature and they are still at prototype stage. This paper presents a methodology to achieve an automated detection and classification of leucocytes by microscope color images. The proposed system firstly individuates in the blood image the leucocytes from the others blood cells, then it extracts morphological indexes and finally it classifies the leucocytes by a neural classifier in Basophil, Eosinophil, Lymphocyte, Monocyte and Neutrophil.",
"title": ""
},
{
"docid": "04f4058d37a33245abf8ed9acd0af35d",
"text": "After being introduced in 2009, the first fully homomorphic encryption (FHE) scheme has created significant excitement in academia and industry. Despite rapid advances in the last 6 years, FHE schemes are still not ready for deployment due to an efficiency bottleneck. Here we introduce a custom hardware accelerator optimized for a class of reconfigurable logic to bring LTV based somewhat homomorphic encryption (SWHE) schemes one step closer to deployment in real-life applications. The accelerator we present is connected via a fast PCIe interface to a CPU platform to provide homomorphic evaluation services to any application that needs to support blinded computations. Specifically we introduce a number theoretical transform based multiplier architecture capable of efficiently handling very large polynomials. When synthesized for the Xilinx Virtex 7 family the presented architecture can compute the product of large polynomials in under 6.25 msec making it the fastest multiplier design of its kind currently available in the literature and is more than 102 times faster than a software implementation. Using this multiplier we can compute a relinearization operation in 526 msec. When used as an accelerator, for instance, to evaluate the AES block cipher, we estimate a per block homomorphic evaluation performance of 442 msec yielding performance gains of 28.5 and 17 times over similar CPU and GPU implementations, respectively.",
"title": ""
},
{
"docid": "b93a949c1c509bf8e5d36a9ec2cb37a5",
"text": "At first glance, agile methods and global software development might seem incompatible. Agile methods stress continuous face-to-face communication, whereas communication has been reported as the biggest problem of global software development. One challenge to solve is how to apply agile practices in settings where continuous face-to-face interaction is missing. However, agile methods have been successfully used in distributed projects, indicating that they could benefit global software development. This paper discusses potential benefits and challenges of adopting agile methods in global software development. The literature on real industrial case studies reporting on experiences of using agile methods in distributed projects is still scarce. Therefore we suggest further research on the topic. We present our plans for research in companies using agile methods in their distributed projects. We also intend to test the use of agile principles in globally distributed student projects developing software for industrial clients",
"title": ""
},
{
"docid": "b174bbcb91d35184674532b6ab22dcdf",
"text": "Many studies have confirmed the benefit of gamification on learners’ motivation. However, gamification may also demotivate some learners, or learners may focus on the gamification elements instead of the learning content. Some researchers have recommended building learner models that can be used to adapt gamification elements based on learners’ personalities. Building such a model requires a strong understanding of the relationship between gamification and personality. Existing empirical work has focused on measuring knowledge gain and learner preference. These findings may not be reliable because the analyses are based on learners who complete the study and because they rely on self-report from learners. This preliminary study explores a different approach by allowing learners to drop out at any time and then uses the number of students left as a proxy for motivation and engagement. Survival analysis is used to analyse the data. The results confirm the benefits of gamification and provide some pointers to how this varies with personality.",
"title": ""
},
{
"docid": "b271916d455789760d1aa6fda6af85c3",
"text": "Over the last decade, automated vehicles have been widely researched and their massive potential has been verified through several milestone demonstrations. However, there are still many challenges ahead. One of the biggest challenges is integrating them into urban environments in which dilemmas occur frequently. Conventional automated driving strategies make automated vehicles foolish in dilemmas such as making lane-change in heavy traffic, handling a yellow traffic light and crossing a double-yellow line to pass an illegally parked car. In this paper, we introduce a novel automated driving strategy that allows automated vehicles to tackle these dilemmas. The key insight behind our automated driving strategy is that expert drivers understand human interactions on the road and comply with mutually-accepted rules, which are learned from countless experiences. In order to teach the driving strategy of expert drivers to automated vehicles, we propose a general learning framework based on maximum entropy inverse reinforcement learning and Gaussian process. Experiments are conducted on a 5.2 km-long campus road at Seoul National University and demonstrate that our framework performs comparably to expert drivers in planning trajectories to handle various dilemmas.",
"title": ""
},
{
"docid": "fe446f500549cedce487b78a133cbc45",
"text": "Drug addiction manifests as a compulsive drive to take a drug despite serious adverse consequences. This aberrant behaviour has traditionally been viewed as bad 'choices' that are made voluntarily by the addict. However, recent studies have shown that repeated drug use leads to long-lasting changes in the brain that undermine voluntary control. This, combined with new knowledge of how environmental, genetic and developmental factors contribute to addiction, should bring about changes in our approach to the prevention and treatment of addiction.",
"title": ""
},
{
"docid": "32faa5a14922d44101281c783cf6defb",
"text": "A novel multifocus color image fusion algorithm based on the quaternion wavelet transform (QWT) is proposed in this paper, aiming at solving the image blur problem. The proposed method uses a multiresolution analysis procedure based on the quaternion wavelet transform. The performance of the proposed fusion scheme is assessed by some experiments, and the experimental results show that the proposed method is effective and performs better than the existing fusion methods.",
"title": ""
},
{
"docid": "3615093867394664629391e515fd4118",
"text": "Using individual data on voting and political parties manifestos in European countries, we empirically characterize the drivers of voting for populist parties (the demand side) as well as the presence of populist parties (the supply side). We show that the economic insecurity drivers of the demand of populism are significant, especially when considering the key interactions with turnout incentives, neglected in previous studies. Once turnout effects are taken into account, economic insecurity drives consensus to populist policies directly and through indirect negative effects on trust and attitudes towards immigrants. On the supply side, populist parties are more likely to emerge when countries are faced with a systemic crisis of economic security. The orientation choice of populist parties, i.e., whether they arise on left or right of the political spectrum, is determined by the availability of political space. The typical mainstream parties response is to reduce the distance of their platform from that of successful populist entrants, amplifying the aggregate supply of populist policies.",
"title": ""
},
{
"docid": "feda50d2876074ce37276d6df7d2823f",
"text": "Word embedding models have become a fundamental component in a wide range of Natural Language Processing (NLP) applications. However, embeddings trained on human-generated corpora have been demonstrated to inherit strong gender stereotypes that reflect social constructs. To address this concern, in this paper, we propose a novel training procedure for learning gender-neutral word embeddings. Our approach aims to preserve gender information in certain dimensions of word vectors while compelling other dimensions to be free of gender influence. Based on the proposed method, we generate a GenderNeutral variant of GloVe (GN-GloVe). Quantitative and qualitative experiments demonstrate that GN-GloVe successfully isolates gender information without sacrificing the functionality of the embedding model.",
"title": ""
},
{
"docid": "cee9b099f6ea087376b56067620e1c64",
"text": "This paper presents a set of techniques for predicting aggressive comments in social media. In a time when cyberbullying has, unfortunately, made its entrance into society and Internet, it becomes necessary to find ways for preventing and overcoming this phenomenon. One of these concerns the use of machine learning techniques for automatically detecting cases of cyberbullying; a primary task within this cyberbullying detection consists of aggressive text detection. We concretely explore different computational techniques for carrying out this task, either as a classification or as a regression problem, and our results suggest that a key feature is the identification of profane words.",
"title": ""
}
] |
scidocsrr
|
19ed707d952f4078a6a30668bd6ded43
|
Representing Animations by Principal Components
|
[
{
"docid": "43db0f06e3de405657996b46047fa369",
"text": "Given two or more objects of general topology, intermediate objects are constructed by a distance field metamorphosis. In the presented method the interpolation of the distance field is guided by a warp function controlled by a set of corresponding anchor points. Some rules for defining a smooth least-distorting warp function are given. To reduce the distortion of the intermediate shapes, the warp function is decomposed into a rigid rotational part and an elastic part. The distance field interpolation method is modified so that the interpolation is done in correlation with the warp function. The method provides the animator with a technique that can be used to create a set of models forming a smooth transition between pairs of a given sequence of keyframe models. The advantage of the new approach is that it is capable of morphing between objects having a different topological genus where no correspondence between the geometric primitives of the models needs to be established. The desired correspondence is defined by an animator in terms of a relatively small number of anchor points",
"title": ""
}
] |
[
{
"docid": "d99005ab76808d74611bc290442019ec",
"text": "Over the last decade, the isoxazoline motif has become the intense focus of crop protection and animal health companies in their search for novel pesticides and ectoparasiticides. Herein we report the discovery of sarolaner, a proprietary, optimized-for-animal health use isoxazoline, for once-a-month oral treatment of flea and tick infestation on dogs.",
"title": ""
},
{
"docid": "2a718f193be63630087bd6c5748b332a",
"text": "This study investigates the intrasentential assignment of reference to pronouns (him, her) and anaphors (himself, herself) as characterized by Binding Theory in a subgroup of \"Grammatical specifically language-impaired\" (SLI) children. The study aims to (1) provide further insight into the underlying nature of Grammatical SLI in children and (2) elucidate the relationship between different sources of knowledge, that is, syntactic knowledge versus knowledge of lexical properties and pragmatic inference in the assignment of intrasentential coreference. In two experiments, using a picture-sentence pair judgement task, the children's knowledge of the lexical properties versus syntactic knowledge (Binding Principles A and B) in the assignment of reflexives and pronouns was investigated. The responses of 12 Grammatical SLI children (aged 9:3 to 12:10) and three language ability (LA) control groups of 12 children (aged 5:9 to 9:1) were compared. The results indicated that the SLI children and the LA controls may use a combination of conceptual-lexical and pragmatic knowledge (e.g., semantic gender, reflexive marking of the predicate, and assignment of theta roles) to help assign reference to anaphors and pronouns. The LA controls also showed appropriate use of the syntactic knowledge. In contrast, the SLI children performed at chance when syntactic information was crucially required to rule out inappropriate coreference. The data are consistent with an impairment with the (innate) syntactic knowledge characterized by Binding Theory which underlies reference assignment to anaphors and pronouns. We conclude that the SLI children's syntactic representation is underspecified with respect to coindexation between constituents and the syntactic properties of pronouns. Support is provided for the proposal that Grammatical SLI children have a modular language deficit with syntactic dependent structural relationships between constituents, that is, a Representational Deficit with Dependent Relationships (RDDR). Further consideration of the linguistic characteristics of this deficit is made in relation to the hypothesized syntactic representations of young normally developing children.",
"title": ""
},
{
"docid": "b4b2c5f66c948cbd4c5fbff7f9062f12",
"text": "China is taking major steps to improve Beijing’s air quality for the 2008 Olympic Games. However, concentrations of fine particulate matter and ozone in Beijing often exceed healthful levels in the summertime. Based on the US EPA’s Models-3/CMAQ model simulation over the Beijing region, we estimate that about 34% of PM2.5 on average and 35–60% of ozone during high ozone episodes at the Olympic Stadium site can be attributed to sources outside Beijing. Neighboring Hebei and Shandong Provinces and the Tianjin Municipality all exert significant influence on Beijing’s air quality. During sustained wind flow from the south, Hebei Province can contribute 50–70% of Beijing’s PM2.5 concentrations and 20–30% of ozone. Controlling only local sources in Beijing will not be sufficient to attain the air quality goal set for the Beijing Olympics. There is an urgent need for regional air quality management studies and new emission control strategies to ensure that the air quality goals for 2008 are met. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "003be771526441c38f91f96b7ecb802f",
"text": "Robotics research and education have gained significant attention in recent years due to increased development and commercial deployment of industrial and service robots. A majority of researchers working on robot grasping and object manipulation tend to utilize commercially available robot-manipulators equipped with various end effectors for experimental studies. However, commercially available robotic grippers are often expensive and are not easy to modify for specific purposes. To extend the choice of robotic end effectors freely available to researchers and educators, we present an open-source low-cost three-finger robotic gripper platform for research and educational purposes. The 3-D design model of the gripper is presented and manufactured with a minimal number of 3-D-printed components and an off-the-shelf servo actuator. An underactuated finger and gear train mechanism, with an overall gripper assembly design, are described in detail, followed by illustrations and a discussion of the gripper grasping performance and possible gripper platform modifications. The presented open-source gripper platform computer-aided design model is released for downloading on the authors research lab website (<;uri xlink:href=\"http://www.alaris.kz\" xlink:type=\"simple\">www.alaris.kz<;/uri>) and can be utilized by robotics researchers and educators as a design platform to build their own robotic end effector solutions for research and educational purposes.",
"title": ""
},
{
"docid": "0ddc7bcfd60d56a0d42cd5424d3a1a71",
"text": "In LLC resonant converters, the variable duty-cycle control is usually combined with a variable frequency control to widen the gain range, improve the light-load efficiency, or suppress the inrush current during start-up. However, a proper analytical model for the variable duty-cycle controlled LLC converter is still not available due to the complexity of operation modes and the nonlinearity of steady-state equations. This paper makes the efforts to develop an analytical model for the LLC converter with variable duty-cycle control. All possible operation models and critical operation characteristics are identified and discussed. The proposed model enables a better understanding of the operation characteristics and fast parameter design of the LLC converter, which otherwise cannot be achieved by the existing simulation based methods and numerical models. The results obtained from the proposed model are in well agreement with the simulations and the experimental verifications from a 500-W prototype.",
"title": ""
},
{
"docid": "32bdd9f720989754744eddb9feedbf32",
"text": "Readability depends on many factors ranging from shallow features like word length to semantic ones like coherence. We introduce novel graph-based coherence features based on frequent subgraphs and compare their ability to assess the readability of Wall Street Journal articles. In contrast to Pitler and Nenkova (2008) some of our graph-based features are significantly correlated with human judgments. We outperform Pitler and Nenkova (2008) in the readability ranking task by more than 5% accuracy thus establishing a new state-of-the-art on this dataset.",
"title": ""
},
{
"docid": "a4b56dcf245b5e823ea12695abc61a77",
"text": "We study complex Chern-Simons theory on a Seifert manifold M3 by embedding it into string theory. We show that complex Chern-Simons theory on M3 is equivalent to a topologically twisted supersymmetric theory and its partition function can be naturally regularized by turning on a mass parameter. We find that the dimensional reduction of this theory to 2d gives the low energy dynamics of vortices in four-dimensional gauge theory, the fact apparently overlooked in the vortex literature. We also generalize the relations between 1) the Verlinde algebra, 2) quantum cohomology of the Grassmannian, 3) Chern-Simons theory on Σ × S1 and 4) index of a spinc Dirac operator on the moduli space of flat connections to a new set of relations between 1) the “equivariant Verlinde algebra” for a complex group, 2) the equivariant quantum K-theory of vortex moduli spaces, 3) complex Chern-Simons theory on Σ × S1 and 4) the equivariant index of a spinc Dirac operator on the moduli space of Higgs bundles. CALT-TH-2014-171 ar X iv :1 50 1. 01 31 0v 1 [ he pth ] 6 J an 2 01 5",
"title": ""
},
{
"docid": "18e95e39417fcb4dd6e294a1ad8fcfd7",
"text": "The paper motivates the need to acquire methodological knowledge for involving children as test users in usability testing. It introduces a methodological framework for delineating comparative assessments of usability testing methods for children participants. This framework consists in three dimensions: (1) assessment criteria for usability testing methods, (2) characteristics describing usability testing methods and, finally, (3) characteristics of children that may impact upon the process and the result of usability testing. Two comparative studies are discussed in the context of this framework along with implications for future research. q 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "4eebd4a2d5c50a2d7de7c36c5296786d",
"text": "Depth information has been used in computer vision for a wide variety of tasks. Since active range sensors are currently available at low cost, high-quality depth maps can be used as relevant input for many applications. Background subtraction and video segmentation algorithms can be improved by fusing depth and color inputs, which are complementary and allow one to solve many classic color segmentation issues. In this paper, we describe one fusion method to combine color and depth based on an advanced color-based algorithm. This technique has been evaluated by means of a complete dataset recorded with Microsoft Kinect, which enables comparison with the original method. The proposed method outperforms the others in almost every test, showing more robustness to illumination changes, shadows, reflections and camouflage.",
"title": ""
},
{
"docid": "795d4e73b3236a2b968609c39ce8f417",
"text": "In this paper, we are introducing an intelligent valet parking management system that guides the cars to autonomously park within a parking lot. The IPLMS for Intelligent Parking Lot Management System, consists of two modules: 1) a model car with a set of micro-controllers and sensors which can scan the environment for suitable parking spot and avoid collision to obstacles, and a Parking Lot Management System (IPLMS) which screens the parking spaces within the parking lot and offers guidelines to the car. The model car has the capability to autonomously maneuver within the parking lot using a fuzzy logic algorithm, and execute parking in the spot determined by the IPLMS, using a parking algorithm. The car receives the instructions from the IPLMS through a wireless communication link. The IPLMS has the flexibility to be adopted by any parking management system, and can potentially save the clients time to look for a parking spot, and/or to stroll from an inaccessible parking space. Moreover, the IPLMS can decrease the financial burden from the parking lot management by offering an easy-to-install system for self-guided valet parking.",
"title": ""
},
{
"docid": "b1c036f2a003ada4eaa965543e7e6d36",
"text": "Seaweed and their constituents have been traditionally employed for the management of various human pathologic conditions such as edema, urinary disorders and inflammatory anomalies. The current study was performed to investigate the antioxidant and anti-arthritic effects of fucoidan from Undaria pinnatifida. A noteworthy in vitro antioxidant potential at 500μg/ml in 2, 2-diphenyl-1-picrylhydrazyl scavenging assay (80% inhibition), nitrogen oxide inhibition assay (71.83%), hydroxyl scavenging assay (71.92%), iron chelating assay (73.55%) and a substantial ascorbic acid equivalent reducing power (399.35μg/mg ascorbic acid equivalent) and total antioxidant capacity (402.29μg/mg AAE) suggested fucoidan a good antioxidant agent. Down regulation of COX-2 expression in rabbit articular chondrocytes in a dose (0-100μg) and time (0-48h) dependent manner, unveiled its in vitro anti-inflammatory significance. In vivo carrageenan induced inflammatory rat model demonstrated a 68.19% inhibition of inflammation whereas an inflammation inhibition potential of 79.38% was recorded in anti-arthritic complete Freund's adjuvant-induced arthritic rat model. A substantial ameliorating effect on altered hematological and biochemical parameters in arthritic rats was also observed. Therefore, findings of the present study prospects fucoidan as a potential antioxidant that can effectively abrogate oxidative stress, edema and arthritis-mediated inflammation and mechanistic studies are recommended for observed activities.",
"title": ""
},
{
"docid": "ca007347ba943d279157b21794ac3871",
"text": "Multiple-choice items are one of the most commonly used tools for evaluating students' knowledge and skills. A key aspect of this type of assessment is the presence of functioning distractors, i.e., incorrect alternatives intended to be plausible for students with lower achievement. To our knowledge, no work has investigated the relationship between distractor performance and the complexity of the cognitive task required to give the correct answer. The aim of this study was to investigate this relation, employing the first three levels of Bloom's taxonomy (Knowledge, Comprehension, and Application). Specifically, it was hypothesized that items classified into a higher level of Bloom's classification would show a greater number of functioning distractors. The study involved 174 items administered to a sample of 848 undergraduate psychology students during their statistics exam. Each student received 30 items randomly selected from the 174-item pool. The bivariate results mainly supported the authors' hypothesis: the highest percentage of functioning distractors was observed among the items classified into the Application category (η2 = 0.024 and Phi = 0.25 for the dichotomized measure). When the analysis controlled for other item features, it lost statistical significance, partly because of the confounding effect of item difficulty.",
"title": ""
},
{
"docid": "a6fbd3f79105fd5c9edfc4a0292a3729",
"text": "The widespread use of templates on the Web is considered harmful for two main reasons. Not only do they compromise the relevance judgment of many web IR and web mining methods such as clustering and classification, but they also negatively impact the performance and resource usage of tools that process web pages. In this paper we present a new method that efficiently and accurately removes templates found in collections of web pages. Our method works in two steps. First, the costly process of template detection is performed over a small set of sample pages. Then, the derived template is removed from the remaining pages in the collection. This leads to substantial performance gains when compared to previous approaches that combine template detection and removal. We show, through an experimental evaluation, that our approach is effective for identifying terms occurring in templates - obtaining F-measure values around 0.9, and that it also boosts the accuracy of web page clustering and classification methods.",
"title": ""
},
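The passage above describes a two-step scheme: detect the template from a small sample of pages, then strip it from the remaining pages in the collection. The Python sketch below illustrates that idea with a deliberately simplified detection rule: terms occurring in a large fraction of the sample pages are treated as template terms. The threshold, whitespace tokenization, and term-level granularity are assumptions of this sketch, not the authors' algorithm.

```python
from collections import Counter

def detect_template_terms(sample_pages, min_fraction=0.8):
    """Treat terms appearing in at least min_fraction of the sample pages
    as template terms (a simplifying assumption of this sketch)."""
    df = Counter()
    for page in sample_pages:
        df.update(set(page.lower().split()))  # document frequency per term
    cutoff = min_fraction * len(sample_pages)
    return {term for term, count in df.items() if count >= cutoff}

def strip_template(page, template_terms):
    """Remove template terms from a page outside the sample."""
    return " ".join(w for w in page.split() if w.lower() not in template_terms)

sample = ["home login news cats are great",
          "home login news dogs are loyal",
          "home login news parrots can talk"]
template = detect_template_terms(sample)
print(strip_template("home login news hamsters are small", template))
```

The costly detection step runs only on the small sample; the cheap stripping step is applied to every other page, which mirrors the two-phase split described above.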
{
"docid": "d9de6a277eec1156e680ee6f656cea10",
"text": "Research in the areas of organizational climate and work performance was used to develop a framework for measuring perceptions of safety at work. The framework distinguished perceptions of the work environment from perceptions of performance related to safety. Two studies supported application of the framework to employee perceptions of safety in the workplace. Safety compliance and safety participation were distinguished as separate components of safety-related performance. Perceptions of knowledge about safety and motivation to perform safely influenced individual reports of safety performance and also mediated the link between safety climate and safety performance. Specific dimensions of safety climate were identified and constituted a higher order safety climate factor. The results support conceptualizing safety climate as an antecedent to safety performance in organizations.",
"title": ""
},
{
"docid": "068295e6848b3228d1f25be84c9bf566",
"text": "We describe an automated system for the large-scale monitoring of Web sites that serve as online storefronts for spam-advertised goods. Our system is developed from an extensive crawl of black-market Web sites that deal in illegal pharmaceuticals, replica luxury goods, and counterfeit software. The operational goal of the system is to identify the affiliate programs of online merchants behind these Web sites; the system itself is part of a larger effort to improve the tracking and targeting of these affiliate programs. There are two main challenges in this domain. The first is that appearances can be deceiving: Web pages that render very differently are often linked to the same affiliate program of merchants. The second is the difficulty of acquiring training data: the manual labeling of Web pages, though necessary to some degree, is a laborious and time-consuming process. Our approach in this paper is to extract features that reveal when Web pages linked to the same affiliate program share a similar underlying structure. Using these features, which are mined from a small initial seed of labeled data, we are able to profile the Web sites of forty-four distinct affiliate programs that account, collectively, for hundreds of millions of dollars in illicit e-commerce. Our work also highlights several broad challenges that arise in the large-scale, empirical study of malicious activity on the Web.",
"title": ""
},
{
"docid": "72e4984c05e6b68b606775bbf4ce3b33",
"text": "This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Finegrained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F , sentences 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.",
"title": ""
},
{
"docid": "dcaa36372cdc34b12ae26875b90c5d56",
"text": "This paper presents two different implementations of four Quadrant CMOS Analog Multiplier Circuits. The Multipliers are designed in current mode. Current squarer and translinear loops are the basic blocks for both the structures in realization of mathematical equations. The structures have simplicity in implementation. The proposed multiplier structures are designed in implementing in 180 nm CMOS technology with a supply of 1.8 V & 1.2 V resp. The structures have frequency bandwidth of 493 MHz & 75 MHz with a power consumption of 146.78μW & 36.08μW respectively.",
"title": ""
},
{
"docid": "ac15d2b4d14873235fe6e4d2dfa84061",
"text": "Despite strong popular conceptions of gender differences in emotionality and striking gender differences in the prevalence of disorders thought to involve emotion dysregulation, the literature on the neural bases of emotion regulation is nearly silent regarding gender differences (Gross, 2007; Ochsner & Gross, in press). The purpose of the present study was to address this gap in the literature. Using functional magnetic resonance imaging, we asked male and female participants to use a cognitive emotion regulation strategy (reappraisal) to down-regulate their emotional responses to negatively valenced pictures. Behaviorally, men and women evidenced comparable decreases in negative emotion experience. Neurally, however, gender differences emerged. Compared with women, men showed (a) lesser increases in prefrontal regions that are associated with reappraisal, (b) greater decreases in the amygdala, which is associated with emotional responding, and (c) lesser engagement of ventral striatal regions, which are associated with reward processing. We consider two non-competing explanations for these differences. First, men may expend less effort when using cognitive regulation, perhaps due to greater use of automatic emotion regulation. Second, women may use positive emotions in the service of reappraising negative emotions to a greater degree. We then consider the implications of gender differences in emotion regulation for understanding gender differences in emotional processing in general, and gender differences in affective disorders.",
"title": ""
},
{
"docid": "ba66e377db4ef2b3c626a0a2f19da8c3",
"text": "A challenging aspect of scene text recognition is to handle text with distortions or irregular layout. In particular, perspective text and curved text are common in natural scenes and are difficult to recognize. In this work, we introduce ASTER, an end-to-end neural network model that comprises a rectification network and a recognition network. The rectification network adaptively transforms an input image into a new one, rectifying the text in it. It is powered by a flexible Thin-Plate Spline transformation which handles a variety of text irregularities and is trained without human annotations. The recognition network is an attentional sequence-to-sequence model that predicts a character sequence directly from the rectified image. The whole model is trained end to end, requiring only images and their groundtruth text. Through extensive experiments, we verify the effectiveness of the rectification and demonstrate the state-of-the-art recognition performance of ASTER. Furthermore, we demonstrate that ASTER is a powerful component in end-to-end recognition systems, for its ability to enhance the detector.",
"title": ""
}
] |
scidocsrr
|
7e090c38dc57e611e59262636cd070fc
|
Electrostatic chuck with a thin ceramic insulation layer for wafer holding
|
[
{
"docid": "00068e4dc90e9bb9f3b8ca7f8e09f679",
"text": "In the semiconductor industry, many manufacturing processes, such as CVD or dry etching, are performed in vacuum condition. The electrostatic wafer chuck is the most preferable handling method under such circumstances. It enables retention of a wafer flat and enhanced heat transfer through the whole surface area because the wafer can firmly contact with the chuck. We have investigated the fundamental study of an electrostatic chuck with comb type electrodes and a thin dielectric film. In order to remove the air gap between them, silicone oil is used as a filler to prevent breakdown. The experimental results proved the potential to use the electrostatic chuck for silicon wafer handling. There, however, is a problem which comes from using silicone oil as an insulating filler. The thin dielectric film is easily deformed by tension when the object starts moving. In this report experimental results of the electrostatic wafer chuck are shown when insulating sealant, instead of silicone oil, is used. The electrostatic force acting on the 4 inch silicon wafer is examined with several types of sealant and dielectric films. The electrostatic force increased with the square of the applied voltage for lower voltage and gradually saturated at higher voltage, and the maximum force obtained was approximately 30 N.",
"title": ""
}
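The passage above notes that the clamping force grows with the square of the applied voltage before saturating. For orientation only, the standard parallel-plate estimate for a Coulomb-type (monopolar) electrostatic chuck, which is a textbook relation and not a formula taken from the cited work, is:

```latex
% P: clamping pressure, F: total force, eps_0: vacuum permittivity,
% eps_r: relative permittivity of the dielectric layer, V: applied voltage,
% d: dielectric thickness, A: wafer contact area.
\[
  P = \frac{\varepsilon_0 \varepsilon_r V^2}{2 d^2},
  \qquad
  F = P A = \frac{\varepsilon_0 \varepsilon_r A V^2}{2 d^2}
\]
```

The V^2 dependence matches the low-voltage behavior reported above; the saturation at higher voltage is not captured by this idealized model.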
] |
[
{
"docid": "718a38a546de2dba3233607d7652c94a",
"text": "In modern power converter circuits, freewheeling diode snappy recovery phenomenon (voltage snap-off) can ultimately destroy the insulated gate bipolar transistor (IGBT) during turn-on and cause a subsequent circuit failure. In this paper, snappy recovery of modern fast power diodes is investigated with the aid of semiconductor device simulation tools, and experimental test results. The work presented here confirms that the reverse recovery process can by expressed by means of diode capacitive effects which influence the reverse recovery characteristics and determine if the diode exhibits soft or snappy recovery behavior. From the experimental and simulation results, a clear view is obtained for the physical process, causes and device/circuit conditions at which snap-off occurs. The analysis is based on the effect of both device and external operating parameters on the excess minority carrier distributions before and during the reverse recovery transient period.",
"title": ""
},
{
"docid": "df0be45b6db0de70acb6bbf44e7898aa",
"text": "The paper focuses on conservation agriculture (CA), defined as minimal soil disturbance (no-till, NT) and permanent soil cover (mulch) combined with rotations, as a more sustainable cultivation system for the future. Cultivation and tillage play an important role in agriculture. The benefits of tillage in agriculture are explored before introducing conservation tillage (CT), a practice that was borne out of the American dust bowl of the 1930s. The paper then describes the benefits of CA, a suggested improvement on CT, where NT, mulch and rotations significantly improve soil properties and other biotic factors. The paper concludes that CA is a more sustainable and environmentally friendly management system for cultivating crops. Case studies from the rice-wheat areas of the Indo-Gangetic Plains of South Asia and the irrigated maize-wheat systems of Northwest Mexico are used to describe how CA practices have been used in these two environments to raise production sustainably and profitably. Benefits in terms of greenhouse gas emissions and their effect on global warming are also discussed. The paper concludes that agriculture in the next decade will have to sustainably produce more food from less land through more efficient use of natural resources and with minimal impact on the environment in order to meet growing population demands. Promoting and adopting CA management systems can help meet this goal.",
"title": ""
},
{
"docid": "170f14fbf337186c8bd9f36390916d2e",
"text": "In this paper, we draw upon two sets of theoretical resources to develop a comprehensive theory of sexual offender rehabilitation named the Good Lives Model-Comprehensive (GLM-C). The original Good Lives Model (GLM-O) forms the overarching values and principles guiding clinical practice in the GLM-C. In addition, the latest sexual offender theory (i.e., the Integrated Theory of Sexual Offending; ITSO) provides a clear etiological grounding for these principles. The result is a more substantial and improved rehabilitation model that is able to conceptually link latest etiological theory with clinical practice. Analysis of the GLM-C reveals that it also has the theoretical resources to secure currently used self-regulatory treatment practice within a meaningful structure. D 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dcf4de4629be22628f5b226a1dcee856",
"text": "Paper prototyping offers unique affordances for interface design. However, due to its spontaneous nature and the limitations of paper, it is difficult to distill and communicate a paper prototype design and its user test findings to a wide audience. To address these issues, we created FrameWire, a computer vision-based system that automatically extracts interaction flows from the video recording of paper prototype user tests. Based on the extracted logic, FrameWire offers two distinct benefits for designers: a structural view of the video recording that allows a designer or a stakeholder to easily distill and understand the design concept and user interaction behaviors, and automatic generation of interactive HTML-based prototypes that can be easily tested with a larger group of users as well as \"walked through\" by other stakeholders. The extraction is achieved by automatically aggregating video frame sequences into an interaction flow graph based on frame similarities and a designer-guided clustering process. The results of evaluating FrameWire with realistic paper prototyping tests show that our extraction approach is feasible and FrameWire is a promising tool for enhancing existing prototyping practice.",
"title": ""
},
{
"docid": "a47d9d5ddcd605755eb60d5499ad7f7a",
"text": "This paper presents a 14MHz Class-E power amplifier to be used for wireless power transmission. The Class-E power amplifier was built to consider the VSWR and the frequency bandwidth. Tw o kinds of circuits were designed: the high and low quality factor amplifiers. The low quality factor amplifier is confirmed to have larger bandwidth than the high quality factor amplifier. It has also possessed less sensitive characteristics. Therefore, the low quality factor amplifier circuit was adopted and tested. The effect of gate driving input source is studied. The efficiency of the Class-E amplifier reaches 85.5% at 63W.",
"title": ""
},
{
"docid": "a81004b3fc39a66d93811841c6d42ff0",
"text": "Failing to properly isolate components in the same address space has resulted in a substantial amount of vulnerabilities. Enforcing the least privilege principle for memory accesses can selectively isolate software components to restrict attack surface and prevent unintended cross-component memory corruption. However, the boundaries and interactions between software components are hard to reason about and existing approaches have failed to stop attackers from exploiting vulnerabilities caused by poor isolation. We present the secure memory views (SMV) model: a practical and efficient model for secure and selective memory isolation in monolithic multithreaded applications. SMV is a third generation privilege separation technique that offers explicit access control of memory and allows concurrent threads within the same process to partially share or fully isolate their memory space in a controlled and parallel manner following application requirements. An evaluation of our prototype in the Linux kernel (TCB < 1,800 LOC) shows negligible runtime performance overhead in real-world applications including Cherokee web server (< 0.69%), Apache httpd web server (< 0.93%), and Mozilla Firefox web browser (< 1.89%) with at most 12 LOC changes.",
"title": ""
},
{
"docid": "5d21e654b54571d2eaf4714b43019ed5",
"text": "Data visualization is the process of representing data as pictures to support reasoning about the underlying data. For the interpretation to be as easy as possible, we need to be as close as possible to the original data. As most visualization tools have an internal meta-model, which is different from the one for the presented data, they usually need to duplicate the original data to conform to their meta-model. This leads to an increase in the resources needed, increase which is not always justified. In this work we argue for the need of having an engine that is as close as possible to the data and we present our solution of moving the visualization tool to the data, instead of moving the data to the visualization tool. Our solution also emphasizes the necessity of reusing basic blocks to express complex visualizations and allowing the programmer to script the visualization using his preferred tools, rather than a third party format. As a validation of the expressiveness of our framework, we show how we express several already published visualizations and describe the pros and cons of the approach.",
"title": ""
},
{
"docid": "2582b0fffad677d3f0ecf11b92d9702d",
"text": "This study explores teenage girls' narrations of the relationship between self-presentation and peer comparison on social media in the context of beauty. Social media provide new platforms that manifest media and peer influences on teenage girls' understanding of beauty towards an idealized notion. Through 24 in-depth interviews, this study examines secondary school girls' self-presentation and peer comparison behaviors on social network sites where the girls posted self-portrait photographs or “selfies” and collected peer feedback in the forms of “likes,” “followers,” and comments. Results of thematic analysis reveal a gap between teenage girls' self-beliefs and perceived peer standards of beauty. Feelings of low self-esteem and insecurity underpinned their efforts in edited self-presentation and quest for peer recognition. Peers played multiple roles that included imaginary audiences, judges, vicarious learning sources, and comparison targets in shaping teenage girls' perceptions and presentation of beauty. Findings from this study reveal the struggles that teenage girls face today and provide insights for future investigations and interventions pertinent to teenage girls’ presentation and evaluation of self on",
"title": ""
},
{
"docid": "dcf7214c15c13f13d33c9a7b2c216588",
"text": "Many machine learning tasks such as multiple instance learning, 3D shape recognition and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the permutation of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from sparse Gaussian process literature. It reduces computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating increased performance compared to recent methods for set-structured data.",
"title": ""
},
{
"docid": "bdc614429426f5ad5aeaa73695d58285",
"text": "Multibaseline (MB) synthetic aperture radar (SAR) tomography is a promising mode of SAR interferometry, allowing full 3-D imaging of volumetric and layover scatterers in place of a single elevation estimation capability for each SAR cell However, Fourier-based MB SAR tomography is generally affected by unsatisfactory imaging quality due to a typically low number of baselines with irregular distribution. In this paper, we improve the basic elevation focusing technique by reconstructing a set of uniform baselines data exploiting in the interpolation step the ancillary information about the extension of a height sector which contains all the scatterers. This a priori information can be derived from the knowledge of the kind of the observed scenario (e.g., forest or urban). To demonstrate the concept, an imaging enhancement analysis is carried out by simulation.",
"title": ""
},
{
"docid": "6974bf94292b51fc4efd699c28c90003",
"text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.",
"title": ""
},
{
"docid": "de1ed7fbb69e5e33e17d1276d265a3e1",
"text": "Abnormal glucose metabolism and enhanced oxidative stress accelerate cardiovascular disease, a chronic inflammatory condition causing high morbidity and mortality. Here, we report that in monocytes and macrophages of patients with atherosclerotic coronary artery disease (CAD), overutilization of glucose promotes excessive and prolonged production of the cytokines IL-6 and IL-1β, driving systemic and tissue inflammation. In patient-derived monocytes and macrophages, increased glucose uptake and glycolytic flux fuel the generation of mitochondrial reactive oxygen species, which in turn promote dimerization of the glycolytic enzyme pyruvate kinase M2 (PKM2) and enable its nuclear translocation. Nuclear PKM2 functions as a protein kinase that phosphorylates the transcription factor STAT3, thus boosting IL-6 and IL-1β production. Reducing glycolysis, scavenging superoxide and enforcing PKM2 tetramerization correct the proinflammatory phenotype of CAD macrophages. In essence, PKM2 serves a previously unidentified role as a molecular integrator of metabolic dysfunction, oxidative stress and tissue inflammation and represents a novel therapeutic target in cardiovascular disease.",
"title": ""
},
{
"docid": "c443ca07add67d6fc0c4901e407c68f2",
"text": "This paper proposes a compiler-based programming framework that automatically translates user-written structured grid code into scalable parallel implementation code for GPU-equipped clusters. To enable such automatic translations, we design a small set of declarative constructs that allow the user to express stencil computations in a portable and implicitly parallel manner. Our framework translates the user-written code into actual implementation code in CUDA for GPU acceleration and MPI for node-level parallelization with automatic optimizations such as computation and communication overlapping. We demonstrate the feasibility of such automatic translations by implementing several structured grid applications in our framework. Experimental results on the TSUBAME2.0 GPU-based supercomputer show that the performance is comparable as hand-written code and good strong and weak scalability up to 256 GPUs.",
"title": ""
},
{
"docid": "eba9ec47b04e08ff2606efa9ffebb6f8",
"text": "OBJECTIVE\nThe incidence of neuroleptic malignant syndrome (NMS) is not known, but the frequency of its occurrence with conventional antipsychotic agents has been reported to vary from 0.02% to 2.44%.\n\n\nDATA SOURCES\nMEDLINE search conducted in January 2003 and review of references within the retrieved articles.\n\n\nDATA SYNTHESIS\nOur MEDLINE research yielded 68 cases (21 females and 47 males) of NMS associated with atypical antipsychotic drugs (clozapine, N = 21; risperidone, N = 23; olanzapine, N = 19; and quetiapine, N = 5). The fact that 21 cases of NMS with clozapine were found indicates that low occurrence of extrapyramidal symptoms (EPS) and low EPS-inducing potential do not prevent the occurrence of NMS and D(2) dopamine receptor blocking potential does not have direct correlation with the occurrence of NMS. One of the cardinal features of NMS is an increasing manifestation of EPS, and the conventional antipsychotic drugs are known to produce EPS in 95% or more of NMS cases. With atypical antipsychotic drugs, the incidence of EPS during NMS is of a similar magnitude.\n\n\nCONCLUSIONS\nFor NMS associated with atypical antipsychotic drugs, the mortality rate was lower than that with conventional antipsychotic drugs. However, the mortality rate may simply be a reflection of physicians' awareness and ensuing early treatment.",
"title": ""
},
{
"docid": "91f8e39777636124d449d1f2829f47de",
"text": "We propose CAEMSI, a cross-domain analytic evaluation methodology for Style Imitation (SI) systems, based on a set of statistical significance tests that allow hypotheses comparing two corpora to be tested. Typically, SI systems are evaluated using human participants, however, this type of approach has several weaknesses. For humans to provide reliable assessments of an SI system, they must possess a sufficient degree of domain knowledge, which can place significant limitations on the pool of participants. Furthermore, both human bias against computer-generated artifacts, and the variability of participants’ assessments call the reliability of the results into question. Most importantly, the use of human participants places limitations on the number of generated artifacts and SI systems which can be feasibly evaluated. Directly motivated by these shortcomings, CAEMSI provides a robust and scalable approach to the evaluation problem. Normalized Compression Distance, a domain-independent distance metric, is used to measure the distance between individual artifacts within a corpus. The difference between corpora is measured using test statistics derived from these inter-artifact distances, and permutation testing is used to determine the significance of the difference. We provide empirical evidence validating the statistical significance tests, using datasets from two distinct domains.",
"title": ""
},
{
"docid": "0e153353fb8af1511de07c839f6eaca5",
"text": "The calculation of a transformer's parasitics, such as its self capacitance, is fundamental for predicting the frequency behavior of the device, reducing this capacitance value and moreover for more advanced aims of capacitance integration and cancellation. This paper presents a comprehensive procedure for calculating all contributions to the self-capacitance of high-voltage transformers and provides a detailed analysis of the problem, based on a physical approach. The advantages of the analytical formulation of the problem rather than a finite element method analysis are discussed. The approach and formulas presented in this paper can also be used for other wound components rather than just step-up transformers. Finally, analytical and experimental results are presented for three different high-voltage transformer architectures.",
"title": ""
},
{
"docid": "3bb97e1573bdec1dd84f38a41d041abd",
"text": "This paper presents a case study of the design of the IT Module-Based Test Automation Framework (ITAF). ITAF is designed to allow IT project teams to share a standardized test automation framework or one of its modules within the.NET technology. This framework allows the project teams to easily automate the test cases, improve the efficiency and productivity, and reuse code and resources. The framework is extensible so that users can contribute and add value to it. Each module of the framework represents one typical technology or an application layer. Each module can be decoupled from the framework and used independently to fulfill an automation goal for a specific type of application.",
"title": ""
},
{
"docid": "92da117d31574246744173b339b0d055",
"text": "We present a method for gesture detection and localization based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at two temporal scales. Key to our technique is a training strategy which exploits i) careful initialization of individual modalities; and ii) gradual fusion of modalities from strongest to weakest cross-modality structure. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams.",
"title": ""
},
{
"docid": "4d178a58cfbf0b9441f5b707ae3e7a3f",
"text": "Allergic contact cheilitis caused by olaflur in toothpaste Anton de Groot1, Ron Tupker2, Diny Hissink3 and Marjolijn Woutersen4 1Acdegroot Publishing, 8351 HV Wapserveen, The Netherlands, 2Department of Dermatology, Sint Antonius Hospital, 3435 CM Nieuwegein, The Netherlands, 3Consumer and Safety Division, Netherlands Food and Consumer Product Safety Authority, 3511 GG Utrecht, The Netherlands, and 4Centre for Safety of Substances and Products, National Institute for Public Health and the Environment, 3721 MA Bilthoven, The Netherlands",
"title": ""
},
{
"docid": "72dc3957db058654d60b590202aba68a",
"text": "Inverted pendulum system is a typical rapid, multivariable, nonlinear, absolute instability and non-minimum phase system, and it is a favorite problem in the field of control theory and application. In its control, the current main control method includes in fuzzy control, variable structure control and robust control etc. For fuzzy control of a double inverted pendulum, the research is focused on how to solve the “rule explosion” problem. The model and characteristics of the system are detailed analyzed; a status fusion function is designed using information fusion. By using it, the output variables of the system with six dimensions is synthesized as two variables: error and variation of error. From the fuzzy control theory, we also design the fuzzy controller of the double inverted pendulum system in MATLAB, and carried out the system simulation in Simulink, results show that the method is feasible.",
"title": ""
}
] |
scidocsrr
|
b2a7bd25c806c9f6dd66f2b6fa66764d
|
CyCADA: Cycle-Consistent Adversarial Domain Adaptation
|
[
{
"docid": "35625f248c81ebb5c20151147483f3f6",
"text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.",
"title": ""
},
{
"docid": "9bb86141611c54978033e2ea40f05b15",
"text": "In this work we investigate the problem of road scene semanti c segmentation using Deconvolutional Networks (DNs). Several c onstraints limit the practical performance of DNs in this context: firstly, the pa ucity of existing pixelwise labelled training data, and secondly, the memory const rai ts of embedded hardware, which rule out the practical use of state-of-theart DN architectures such as fully convolutional networks (FCN). To address the fi rst constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (M DRS3) dataset, aggregating data from six existing densely and sparsely lab elled datasets for training our models, and two existing, separate datasets for test ing their generalisation performance. We show that, while MDRS3 offers a greater volu me and variety of data, end-to-end training of a memory efficient DN does not yield satisfactory performance. We propose a new training strategy to over c me this, based on (i) the creation of a best-possible source network (S-Net ) from the aggregated data, ignoring time and memory constraints; and (ii) the tra nsfer of knowledge from S-Net to the memory-efficient target network (T-Net). W e evaluate different techniques for S-Net creation and T-Net transferral, and de monstrate that training a constrained deconvolutional network in this manner can un lock better performance than existing training approaches. Specifically, we s how that a target network can be trained to achieve improved accuracy versus an FC N despite using less than 1% of the memory. We believe that our approach can be useful beyond automotive scenarios where labelled data is similarly scar ce o fragmented and where practical constraints exist on the desired model size . We make available our network models and aggregated multi-domain dataset for reproducibility.",
"title": ""
}
] |
[
{
"docid": "3bc1a34b361c4356f69d084e0db54b9e",
"text": "Predicting program properties such as names or expression types has a wide range of applications. It can ease the task of programming, and increase programmer productivity. A major challenge when learning from programs is how to represent programs in a way that facilitates effective learning. \n We present a general path-based representation for learning from programs. Our representation is purely syntactic and extracted automatically. The main idea is to represent a program using paths in its abstract syntax tree (AST). This allows a learning model to leverage the structured nature of code rather than treating it as a flat sequence of tokens. \n We show that this representation is general and can: (i) cover different prediction tasks, (ii) drive different learning algorithms (for both generative and discriminative models), and (iii) work across different programming languages. \n We evaluate our approach on the tasks of predicting variable names, method names, and full types. We use our representation to drive both CRF-based and word2vec-based learning, for programs of four languages: JavaScript, Java, Python and C#. Our evaluation shows that our approach obtains better results than task-specific handcrafted representations across different tasks and programming languages.",
"title": ""
},
{
"docid": "44050ba52838a583e2efb723b10f0234",
"text": "This paper presents a novel approach to the reconstruction of geometric models and surfaces from given sets of points using volume splines. It results in the representation of a solid by the inequality The volume spline is based on use of the Green’s function for interpolation of scalar function values of a chosen “carrier” solid. Our algorithm is capable of generating highly concave and branching objects automatically. The particular case where the surface is reconstructed from cross-sections is discussed too. Potential applications of this algorithm are in tomography, image processing, animation and CAD f o r bodies with complex surfaces.",
"title": ""
},
{
"docid": "777d4e55f3f0bbb0544130931006b237",
"text": "Spatial pyramid matching is a standard architecture for categorical image retrieval. However, its performance is largely limited by the prespecified rectangular spatial regions when pooling local descriptors. In this paper, we propose to learn object-shaped and directional receptive fields for image categorization. In particular, different objects in an image are seamlessly constructed by superpixels, while the direction captures human gaze shifting path. By generating a number of superpixels in each image, we construct graphlets to describe different objects. They function as the object-shaped receptive fields for image comparison. Due to the huge number of graphlets in an image, a saliency-guided graphlet selection algorithm is proposed. A manifold embedding algorithm encodes graphlets with the semantics of training image tags. Then, we derive a manifold propagation to calculate the postembedding graphlets by leveraging visual saliency maps. The sequentially propagated graphlets constitute a path that mimics human gaze shifting. Finally, we use the learned graphlet path as receptive fields for local image descriptor pooling. The local descriptors from similar receptive fields of pairwise images more significantly contribute to the final image kernel. Thorough experiments demonstrate the advantage of our approach.",
"title": ""
},
{
"docid": "c0890c01e51ddedf881cd3d110efa6e2",
"text": "A residual networks family with hundreds or even thousands of layers dominates major image recognition tasks, but building a network by simply stacking residual blocks inevitably limits its optimization ability. This paper proposes a novel residual network architecture, residual networks of residual networks (RoR), to dig the optimization ability of residual networks. RoR substitutes optimizing residual mapping of residual mapping for optimizing original residual mapping. In particular, RoR adds levelwise shortcut connections upon original residual networks to promote the learning capability of residual networks. More importantly, RoR can be applied to various kinds of residual networks (ResNets, Pre-ResNets, and WRN) and significantly boost their performance. Our experiments demonstrate the effectiveness and versatility of RoR, where it achieves the best performance in all residual-network-like structures. Our RoR-3-WRN58-4 + SD models achieve new state-of-the-art results on CIFAR-10, CIFAR-100, and SVHN, with the test errors of 3.77%, 19.73%, and 1.59%, respectively. RoR-3 models also achieve state-of-the-art results compared with ResNets on the ImageNet data set.",
"title": ""
},
{
"docid": "30a57dc7d69b302219e05918b874d2b2",
"text": "In recent years, flight delay problem blocks the development of the civil aviation industry all over the world. And delay propagation always is a main factor that impacts the flight's delay. All kinds of delays often happen in nearly-saturated or overloaded airports. This paper we take one busy hub-airport as the main research object to estimate the arrival delay in this airport, and to discuss the influence of propagation within and from this airport. First, a delay propagation model is described qualitatively in mathematics after sorting and analyzing the relationships between all flights, especially focused on the frequently type, named aircraft correlation. Second, an arrival delay model is established based on Bayesian network. By training the model, the arrival delay in this airport can be estimated. Third, after clarifying the arrival status of one airport, the impact from propagation of arrival delays within and from this busy airport is discussed, especially between the flights belonging to one same air company. All the data used in our experiments is come from real records, for the industry secret, the name of the airport and the air company is hidden.",
"title": ""
},
{
"docid": "37a4b2d15a29132efa362b4de8f259fc",
"text": "Without the need of any transmission line, a very compact decoupling network based on reactive lumped elements is presented for a two-element closely spaced array. The lumped network, consisting of two series and four shunt elements, can be analytically designed using the even-odd mode analysis. In the even mode, the half-circuit of the decoupling network is identical to an L-section matching network, while in the odd mode it is equivalent to a π-section one. The proposed decoupling network can deal with the matching conditions of the even and odd modes independently so as to simultaneously achieve good impedance matching and port isolation of the whole antenna array. The design principle, formulation, and experimental results including the radiation characteristics are introduced.",
"title": ""
},
{
"docid": "4706560ae6318724e6eb487d23804a76",
"text": "Schizophrenia is a complex neurodevelopmental disorder characterized by cognitive deficits. These deficits in cognitive functioning have been shown to relate to a variety of functional and treatment outcomes. Cognitive adaptation training (CAT) is a home-based, manual-driven treatment that utilizes environmental supports and compensatory strategies to bypass cognitive deficits and improve target behaviors and functional outcomes in individuals with schizophrenia. Unlike traditional case management, CAT provides environmental supports and compensatory strategies tailored to meet the behavioral style and neurocognitive deficits of each individual patient. The case of Ms. L. is presented to illustrate CAT treatment.",
"title": ""
},
{
"docid": "cdf78bab8d93eda7ccbb41674d24b1a2",
"text": "OBJECTIVE\nThe U.S. Food and Drug Administration and Institute of Medicine are currently investigating front-of-package (FOP) food labelling systems to provide science-based guidance to the food industry. The present paper reviews the literature on FOP labelling and supermarket shelf-labelling systems published or under review by February 2011 to inform current investigations and identify areas of future research.\n\n\nDESIGN\nA structured search was undertaken of research studies on consumer use, understanding of, preference for, perception of and behaviours relating to FOP/shelf labelling published between January 2004 and February 2011.\n\n\nRESULTS\nTwenty-eight studies from a structured search met inclusion criteria. Reviewed studies examined consumer preferences, understanding and use of different labelling systems as well as label impact on purchasing patterns and industry product reformulation.\n\n\nCONCLUSIONS\nThe findings indicate that the Multiple Traffic Light system has most consistently helped consumers identify healthier products; however, additional research on different labelling systems' abilities to influence consumer behaviour is needed.",
"title": ""
},
{
"docid": "458dacc4d32c5a80bd88b88bf537e50e",
"text": "The aim of the study is to investigate the spiritual intelligence role in predicting Quchan University students’ quality of life. In order to collect data, a sample of 143 students of Quechan University was selected randomly enrolled for 89–90 academic year. The instruments of the data collecting are World Health Organization Quality of Life (WHOQOL) and Spiritual Intelligence Questionnaire. For analyzing the data, the standard deviation, and Pearson’s correlation coefficient in descriptive level, and in inferential level, the regression test was used. The results of the study show that the spiritual intelligence has effective role on predicting quality of life.",
"title": ""
},
{
"docid": "c3f943da2d68ee7980972a77c685fde6",
"text": "*Correspondence: [email protected] Department of Mathematics and Applied Mathematics, University of the Western Cape, Private Bag X17, Bellville, 7535, Republic of South Africa Abstract Antiretroviral treatment (ART) and oral pre-exposure prophylaxis (PrEP) have recently been used efficiently in management of HIV infection. Pre-exposure prophylaxis consists in the use of an antiretroviral medication to prevent the acquisition of HIV infection by uninfected individuals. We propose a new model for the transmission of HIV/AIDS including ART and PrEP. Our model can be used to test the effects of ART and of the uptake of PrEP in a given population, as we demonstrate through simulations. The model can also be used to estimate future projections of HIV prevalence. We prove global stability of the disease-free equilibrium. We also prove global stability of the endemic equilibrium for the most general case of the model, i.e., which allows for PrEP individuals to default. We include insightful simulations based on recently published South-African data.",
"title": ""
},
{
"docid": "fb37da1dc9d95501e08d0a29623acdab",
"text": "This study evaluates various evolutionary search methods to direct neural controller evolution in company with policy (behavior) transfer across increasingly complex collective robotic (RoboCup keep-away) tasks. Robot behaviors are first evolved in a source task and then transferred for further evolution to more complex target tasks. Evolutionary search methods tested include objective-based search (fitness function), behavioral and genotypic diversity maintenance, and hybrids of such diversity maintenance and objective-based search. Evolved behavior quality is evaluated according to effectiveness and efficiency. Effectiveness is the average task performance of transferred and evolved behaviors, where task performance is the average time the ball is controlled by a keeper team. Efficiency is the average number of generations taken for the fittest evolved behaviors to reach a minimum task performance threshold given policy transfer. Results indicate that policy transfer coupled with hybridized evolution (behavioral diversity maintenance and objective-based search) addresses the bootstrapping problem for increasingly complex keep-away tasks. That is, this hybrid method (coupled with policy transfer) evolves behaviors that could not otherwise be evolved. Also, this hybrid evolutionary search was demonstrated as consistently evolving topologically simple neural controllers that elicited high-quality behaviors.",
"title": ""
},
{
"docid": "39188ae46f22dd183f356ba78528b720",
"text": "Systemic risk is a key concern for central banks charged with safeguarding overall financial stability. In this paper we investigate how systemic risk is affected by the structure of the financial system. We construct banking systems that are composed of a number of banks that are connected by interbank linkages. We then vary the key parameters that define the structure of the financial system — including its level of capitalisation, the degree to which banks are connected, the size of interbank exposures and the degree of concentration of the system — and analyse the influence of these parameters on the likelihood of contagious (knock-on) defaults. First, we find that the better capitalised banks are, the more resilient is the banking system against contagious defaults and this effect is non-linear. Second, the effect of the degree of connectivity is non-monotonic, that is, initially a small increase in connectivity increases the contagion effect; but after a certain threshold value, connectivity improves the ability of a banking system to absorb shocks. Third, the size of interbank liabilities tends to increase the risk of knock-on default, even if banks hold capital against such exposures. Fourth, more concentrated banking systems are shown to be prone to larger systemic risk, all else equal. In an extension to the main analysis we study how liquidity effects interact with banking structure to produce a greater chance of systemic breakdown. We finally consider how the risk of contagion might depend on the degree of asymmetry (tiering) inherent in the structure of the banking system. A number of our results have important implications for public policy, which this paper also draws out.",
"title": ""
},
{
"docid": "c61e25e5896ff588764639b6a4c18d2e",
"text": "Social media is continually emerging as a platform of information exchange around health challenges. We study mental health discourse on the popular social media: reddit. Building on findings about health information seeking and sharing practices in online forums, and social media like Twitter, we address three research challenges. First, we present a characterization of self-disclosure in mental illness communities on reddit. We observe individuals discussing a variety of concerns ranging from the daily grind to specific queries about diagnosis and treatment. Second, we build a statistical model to examine the factors that drive social support on mental health reddit communities. We also develop language models to characterize mental health social support, which are observed to bear emotional, informational, instrumental, and prescriptive information. Finally, we study disinhibition in the light of the dissociative anonymity that reddit’s throwaway accounts provide. Apart from promoting open conversations, such anonymity surprisingly is found to gather feedback that is more involving and emotionally engaging. Our findings reveal, for the first time, the kind of unique information needs that a social media like reddit might be fulfilling when it comes to a stigmatic illness. They also expand our understanding of the role of the social web in behavioral therapy.",
"title": ""
},
{
"docid": "72f59a5342e3dc9d9c038fae8b9d4844",
"text": "Borromean rings or links are topologically complex assemblies of three entangled rings where no two rings are interlinked in a chain-like catenane, yet the three rings cannot be separated. We report here a metallacycle complex whose crystalline network forms the first example of a new class of entanglement. The complex is formed from the self-assembly of CuBr2 with the cyclotriveratrylene-scaffold ligand (±)-tris(iso-nicotinoyl)cyclotriguaiacylene. Individual metallacycles are interwoven into a two-dimensional chainmail network where each metallacycle exhibits multiple Borromean-ring-like associations with its neighbours. This only occurs in the solid state, and also represents the first example of a crystalline infinite chainmail two-dimensional network. Crystals of the complex were twinned and have an unusual hollow tubular morphology that is likely to result from a localized dissolution-recrystallization process.",
"title": ""
},
{
"docid": "ec69b95261fc19183a43c0e102f39016",
"text": "The selection of a surgical approach for the treatment of tibia plateau fractures is an important decision. Approximately 7% of all tibia plateau fractures affect the posterolateral corner. Displaced posterolateral tibia plateau fractures require anatomic articular reduction and buttress plate fixation on the posterior aspect. These aims are difficult to reach through a lateral or anterolateral approach. The standard posterolateral approach with fibula osteotomy and release of the posterolateral corner is a traumatic procedure, which includes the risk of fragment denudation. Isolated posterior approaches do not allow sufficient visual control of fracture reduction, especially if the fracture is complex. Therefore, the aim of this work was to present a surgical approach for posterolateral tibial plateau fractures that both protects the soft tissue and allows for good visual control of fracture reduction. The approach involves a lateral arthrotomy for visualizing the joint surface and a posterolateral approach for the fracture reduction and plate fixation, which are both achieved through one posterolateral skin incision. Using this approach, we achieved reduction of the articular surface and stable fixation in six of seven patients at the final follow-up visit. No complications and no loss of reduction were observed. Additionally, the new posterolateral approach permits direct visual exposure and facilitates the application of a buttress plate. Our approach does not require fibular osteotomy, and fragments of the posterolateral corner do not have to be detached from the soft tissue network.",
"title": ""
},
{
"docid": "8e28f1561b3a362b2892d7afa8f2164c",
"text": "Inference based techniques are one of the major approaches to analyze DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an indepth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on weak “co-IP” relationship of domains (i.e., two domains are resolved to the same IP) that results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as improving inference efficiency, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvement with only minor negative impact of detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.",
"title": ""
},
{
"docid": "a29a61f5ad2e4b44e8e3d11b471a0f06",
"text": "To ascertain by MRI the presence of filler injected into facial soft tissue and characterize complications by contrast enhancement. Nineteen volunteers without complications were initially investigated to study the MRI features of facial fillers. We then studied another 26 patients with clinically diagnosed filler-related complications using contrast-enhanced MRI. TSE-T1-weighted, TSE-T2-weighted, fat-saturated TSE-T2-weighted, and TIRM axial and coronal scans were performed in all patients, and contrast-enhanced fat-suppressed TSE-T1-weighted scans were performed in complicated patients, who were then treated with antibiotics. Patients with soft-tissue enhancement and those without enhancement but who did not respond to therapy underwent skin biopsy. Fisher’s exact test was used for statistical analysis. MRI identified and quantified the extent of fillers. Contrast enhancement was detected in 9/26 patients, and skin biopsy consistently showed inflammatory granulomatous reaction, whereas in 5/17 patients without contrast enhancement, biopsy showed no granulomas. Fisher’s exact test showed significant correlation (p < 0.001) between subcutaneous contrast enhancement and granulomatous reaction. Cervical lymph node enlargement (longitudinal axis >10 mm) was found in 16 complicated patients (65 %; levels IA/IB/IIA/IIB). MRI is a useful non-invasive tool for anatomical localization of facial dermal filler; IV gadolinium administration is advised in complicated cases for characterization of granulomatous reaction. • MRI is a non-invasive tool for facial dermal filler detection and localization. • MRI-criteria to evaluate complicated/non-complicated cases after facial dermal filler injections are defined. • Contrast-enhanced MRI detects subcutaneous inflammatory granulomatous reaction due to dermal filler. • 65 % patients with filler-related complications showed lymph-node enlargement versus 31.5 % without complications. • Lymph node enlargement involved cervical levels (IA/IB/IIA/IIB) that drained treated facial areas.",
"title": ""
},
{
"docid": "6821d4c1114e007453578dd90600db15",
"text": "Our goal is to assess the strategic and operational benefits of electronic integration for industrial procurement. We conduct a field study with an industrial supplier and examine the drivers of performance of the procurement process. Our research quantifies both the operational and strategic impacts of electronic integration in a B2B procurement environment for a supplier. Additionally, we show that the customer also obtains substantial benefits from efficient procurement transaction processing. We isolate the performance impact of technology choice and ordering processes on both the trading partners. A significant finding is that the supplier derives large strategic benefits when the customer initiates the system and the supplier enhances the system’s capabilities. With respect to operational benefits, we find that when suppliers have advanced electronic linkages, the order-processing system significantly increases benefits to both parties. (Business Value of IT; Empirical Assessment; Electronic Integration; Electronic Procurement; B2B; Strategic IT Impact; Operational IT Impact)",
"title": ""
},
{
"docid": "b9ef363fc7563dd14b3a4fd781d76d91",
"text": "Deep learning (DL)-based Reynolds stress with its capability to leverage values of large data can be used to close Reynolds-averaged Navier-Stoke (RANS) equations. Type I and Type II machine learning (ML) frameworks are studied to investigate data and flow feature requirements while training DL-based Reynolds stress. The paper presents a method, flow features coverage mapping (FFCM), to quantify the physics coverage of DL-based closures that can be used to examine the sufficiency of training data points as well as input flow features for data-driven turbulence models. Three case studies are formulated to demonstrate the properties of Type I and Type II ML. The first case indicates that errors of RANS equations with DL-based Reynolds stress by Type I ML are accumulated along with the simulation time when training data do not sufficiently cover transient details. The second case uses Type I ML to show that DL can figure out time history of flow transients from data sampled at various times. The case study also shows that the necessary and sufficient flow features of DL-based closures are first-order spatial derivatives of velocity fields. The last case demonstrates the limitation of Type II ML for unsteady flow simulation. Type II ML requires initial conditions to be sufficiently close to reference data. Then reference data can be used to improve RANS simulation.",
"title": ""
},
{
"docid": "692b11d9502fa7f9fc299e1a9addbfb3",
"text": "This paper presents the first version of the NIST Cloud Computing Reference Architecture (RA). This is a vendor neutral conceptual model that concentrates on the role and interactions of the identified actors in the cloud computing sphere. Five primary actors were identified Cloud Service Consumer, Cloud Service Provider, Cloud Broker, Cloud Auditor and Cloud Carrier. Their roles and activities are discussed in this report. A primary goal for generating this model was to give the United States Government (USG) a method for understanding and communicating the components of a cloud computing system for Federal IT executives, Program Managers and IT procurement officials. Keywords-component; cloud computing, reference architecture, Federal Government",
"title": ""
}
] |
scidocsrr
|
8d6345ae1dbe14185089ee6bb06dc57f
|
Learning from Examples as an Inverse Problem
|
[
{
"docid": "f51a854a390be7d6980b49aea2e955cf",
"text": "The purpose of this paper is to provide a PAC error analysis for the q-norm soft margin classifier, a support vector machine classification algorithm. It consists of two parts: regularization error and sample error. While many techniques are available for treating the sample error, much less is known for the regularization error and the corresponding approximation error for reproducing kernel Hilbert spaces. We are mainly concerned about the regularization error. It is estimated for general distributions by a K-functional in weighted L spaces. For weakly separable distributions (i.e., the margin may be zero) satisfactory convergence rates are provided by means of separating functions. A projection operator is introduced, which leads to better sample error estimates especially for small complexity kernels. The misclassification error is bounded by the V -risk associated with a general class of loss functions V . The difficulty of bounding the offset is overcome. Polynomial kernels and Gaussian kernels are used to demonstrate the main results. The choice of the regularization parameter plays an important role in our analysis.",
"title": ""
}
] |
[
{
"docid": "19607c362f07ebe0238e5940fefdf03f",
"text": "This paper presents an approach for generating photorealistic video sequences of dynamically varying facial expressions in human-agent interactions. To this end, we study human-human interactions to model the relationship and influence of one individual's facial expressions in the reaction of the other. We introduce a two level optimization of generative adversarial models, wherein the first stage generates a dynamically varying sequence of the agent's face sketch conditioned on facial expression features derived from the interacting human partner. This serves as an intermediate representation, which is used to condition a second stage generative model to synthesize high-quality video of the agent face. Our approach uses a novel L1 regularization term computed from layer features of the discriminator, which are integrated with the generator objective in the GAN model. Session constraints are also imposed on video frame generation to ensure appearance consistency between consecutive frames. We demonstrated that our model is effective at generating visually compelling facial expressions. Moreover, we quantitatively showed that agent facial expressions in the generated video clips reflect valid emotional reactions to behavior of the human partner.",
"title": ""
},
{
"docid": "57a23f68303a3694e4e6ba66e36f7015",
"text": "OBJECTIVE\nTwo studies using cross-sectional designs explored four possible mechanisms by which loneliness may have deleterious effects on health: health behaviors, cardiovascular activation, cortisol levels, and sleep.\n\n\nMETHODS\nIn Study 1, we assessed autonomic activity, salivary cortisol levels, sleep quality, and health behaviors in 89 undergraduate students selected based on pretests to be among the top or bottom quintile in feelings of loneliness. In Study 2, we assessed blood pressure, heart rate, salivary cortisol levels, sleep quality, and health behaviors in 25 older adults whose loneliness was assessed at the time of testing at their residence.\n\n\nRESULTS\nTotal peripheral resistance was higher in lonely than nonlonely participants, whereas cardiac contractility, heart rate, and cardiac output were higher in nonlonely than lonely participants. Lonely individuals also reported poorer sleep than nonlonely individuals. Study 2 indicated greater age-related increases in blood pressure and poorer sleep quality in lonely than nonlonely older adults. Mean salivary cortisol levels and health behaviors did not differ between groups in either study.\n\n\nCONCLUSIONS\nResults point to two potentially orthogonal predisease mechanisms that warrant special attention: cardiovascular activation and sleep dysfunction. Health behavior and cortisol regulation, however, may require more sensitive measures and large sample sizes to discern their roles in loneliness and health.",
"title": ""
},
{
"docid": "e4892dfe4da663c4044a78a8892010a8",
"text": "Turkey has been undertaking many projects to integrate Information and Communication Technology (ICT) sources into practice in the teaching-learning process in educational institutions. This research study sheds light on the use of ICT tools in primary schools in the social studies subject area, by considering various variables which affect the success of the implementation of the use of these tools. A survey was completed by 326 teachers who teach fourth and fifth grade at primary level. The results showed that although teachers are willing to use ICT resources and are aware of the existing potential, they are facing problems in relation to accessibility to ICT resources and lack of in-service training opportunities.",
"title": ""
},
{
"docid": "0f2caa9b91c2c180cbfbfcc25941f78e",
"text": "BACKGROUND\nSevere mitral annular calcification causing degenerative mitral stenosis (DMS) is increasingly encountered in patients undergoing mitral and aortic valve interventions. However, its clinical profile and natural history and the factors affecting survival remain poorly characterized. The goal of this study was to characterize the factors affecting survival in patients with DMS.\n\n\nMETHODS\nAn institutional echocardiographic database was searched for patients with DMS, defined as severe mitral annular calcification without commissural fusion and a mean transmitral diastolic gradient of ≥2 mm Hg. This resulted in a cohort of 1,004 patients. Survival was analyzed as a function of clinical, pharmacologic, and echocardiographic variables.\n\n\nRESULTS\nThe patient characteristics were as follows: mean age, 73 ± 14 years; 73% women; coronary artery disease in 49%; and diabetes mellitus in 50%. The 1- and 5-year survival rates were 78% and 47%, respectively, and were slightly worse with higher DMS grades (P = .02). Risk factors for higher mortality included greater age (P < .0001), atrial fibrillation (P = .0009), renal insufficiency (P = .004), mitral regurgitation (P < .0001), tricuspid regurgitation (P < .0001), elevated right atrial pressure (P < .0001), concomitant aortic stenosis (P = .02), and low serum albumin level (P < .0001). Adjusted for propensity scores, use of renin-angiotensin system blockers (P = .02) or statins (P = .04) was associated with better survival, and use of digoxin was associated with higher mortality (P = .007).\n\n\nCONCLUSIONS\nPrognosis in patients with DMS is poor, being worse in the aged and those with renal insufficiency, atrial fibrillation, and other concomitant valvular lesions. Renin-angiotensin system blockers and statins may confer a survival benefit, and digoxin use may be associated with higher mortality in these patients.",
"title": ""
},
{
"docid": "073ea28d4922c2d9c1ef7945ce4aa9e2",
"text": "The three major solutions for increasing the nominal performance of a CPU are: multiplying the number of cores per socket, expanding the embedded cache memories and use multi-threading to reduce the impact of the deep memory hierarchy. Systems with tens or hundreds of hardware threads, all sharing a cache coherent UMA or NUMA memory space, are today the de-facto standard. While these solutions can easily provide benefits in a multi-program environment, they require recoding of applications to leverage the available parallelism. Threads must synchronize and exchange data, and the overall performance is heavily in influenced by the overhead added by these mechanisms, especially as developers try to exploit finer grain parallelism to be able to use all available resources.",
"title": ""
},
{
"docid": "3913e29aab9b4447edfd4f34a16c38ed",
"text": "This review compares the biological and physiological function of Sigma receptors [σRs] and their potential therapeutic roles. Sigma receptors are widespread in the central nervous system and across multiple peripheral tissues. σRs consist of sigma receptor one (σ1R) and sigma receptor two (σ2R) and are expressed in numerous regions of the brain. The sigma receptor was originally proposed as a subtype of opioid receptors and was suggested to contribute to the delusions and psychoses induced by benzomorphans such as SKF-10047 and pentazocine. Later studies confirmed that σRs are non-opioid receptors (not an µ opioid receptor) and play a more diverse role in intracellular signaling, apoptosis and metabolic regulation. σ1Rs are intracellular receptors acting as chaperone proteins that modulate Ca2+ signaling through the IP3 receptor. They dynamically translocate inside cells, hence are transmembrane proteins. The σ1R receptor, at the mitochondrial-associated endoplasmic reticulum membrane, is responsible for mitochondrial metabolic regulation and promotes mitochondrial energy depletion and apoptosis. Studies have demonstrated that they play a role as a modulator of ion channels (K+ channels; N-methyl-d-aspartate receptors [NMDAR]; inositol 1,3,5 triphosphate receptors) and regulate lipid transport and metabolism, neuritogenesis, cellular differentiation and myelination in the brain. σ1R modulation of Ca2+ release, modulation of cardiac myocyte contractility and may have links to G-proteins. It has been proposed that σ1Rs are intracellular signal transduction amplifiers. This review of the literature examines the mechanism of action of the σRs, their interaction with neurotransmitters, pharmacology, location and adverse effects mediated through them.",
"title": ""
},
{
"docid": "a33f862d0b7dfde7b9f18aa193db9acf",
"text": "Phytoremediation is an important process in the removal of heavy metals and contaminants from the soil and the environment. Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Phytoremediation in phytoextraction is a major technique. In this process is the use of plants or algae to remove contaminants in the soil, sediment or water in the harvesting of plant biomass. Heavy metal is generally known set of elements with atomic mass (> 5 gcm -3), particularly metals such as exchange of cadmium, lead and mercury. Between different pollutant cadmium (Cd) is the most toxic and plant and animal heavy metals. Mustard (Brassica juncea L.) and Sunflower (Helianthus annuus L.) are the plant for the production of high biomass and rapid growth, and it seems that the appropriate species for phytoextraction because it can compensate for the low accumulation of cadmium with a much higher biomass yield. To use chelators, such as acetic acid, ethylene diaminetetraacetic acid (EDTA), and to increase the solubility of metals in the soil to facilitate easy availability indiscernible and the absorption of the plant from root leg in vascular plants. *Corresponding Author: Awais Shakoor [email protected] Journal of Biodiversity and Environmental Sciences (JBES) ISSN: 2220-6663 (Print) 2222-3045 (Online) Vol. 10, No. 3, p. 88-98, 2017 http://www.innspub.net J. Bio. Env. Sci. 2017 89 | Shakoor et al. Introduction Phytoremediation consists of Greek and words of \"station\" and Latin remedium plants, which means \"rebalancing\" describes the treatment of environmental problems treatment (biological) through the use of plants that mitigate the environmental problem without digging contaminated materials and disposed of elsewhere. Controlled by the plant interactions with groundwater and organic and inorganic contaminated materials in specific locations to achieve therapeutic targets molecules site application (Landmeyer, 2011). Phytoremediation is the use of green plants to remove contaminants from the environment or render them harmless. The technology that uses plants to\" green space \"of heavy metals in the soil through the roots. While vacuum cleaners and you should be able to withstand and survive high levels of heavy metals in the soil unique plants (Baker, 2000). The main result in increasing the population and more industrialization are caused water and soil contamination that is harmful for environment as well as human health. In the whole world, contamination in the soil by heavy metals has become a very serious issue. So, removal of these heavy metals from the soil is very necessary to protect the soil and human health. Both inorganic and organic contaminants, like petroleum, heavy metals, agricultural waste, pesticide and fertilizers are the main source that deteriorate the soil health (Chirakkara et al., 2016). Heavy metals have great role in biological system, so we can divide into two groups’ essentials and non essential. Those heavy metals which play a vital role in biochemical and physiological function in some living organisms are called essential heavy metals, like zinc (Zn), nickel (Ni) and cupper (Cu) (Cempel and Nikel, 2006). In some living organisms, heavy metals don’t play any role in biochemical as well as physiological functions are called non essential heavy metals, such as mercury (Hg), lead (Pb), arsenic (As), and Cadmium (Cd) (Dabonne et al., 2010). 
Cadmium (Cd) is consider as a non essential heavy metal that is more toxic at very low concentration as compare to other non essential heavy metals. It is toxic to plant, human and animal health. Cd causes serious diseases in human health through the food chain (Rafiq et al., 2014). So, removal of Cd from the soil is very important problem to overcome these issues (Neilson and Rajakaruna, 2015). Several methods are used to remove the Cd from the soil, such as physical, chemical and physiochemical to increase the soil pH (Liu et al., 2015). The main source of Cd contamination in the soil and environment is automobile emissions, batteries and commercial fertilizers (Liu et al., 2015). Phytoremediation is a promising technique that is used in removing the heavy metals form the soil (Ma et al., 2011). Plants update the heavy metals through the root and change the soil properties which are helpful in increasing the soil fertility (Mench et al., 2009). Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Plants also help prevent wind and rain, groundwater and implementation of pollution off site to other areas. Phytoremediation works best in locations with low to moderate amounts of pollution. Plants absorb harmful chemicals from the soil when the roots take in water and nutrients from contaminated soils, streams and groundwater. Once inside the plant and chemicals can be stored in the roots, stems, or leaves. Change of less harmful chemicals within the plant. Or a change in the gases that are released into the air as a candidate plant Agency (US Environmental Protection, 2001). Phytoremediation is the direct use of living green plants and minutes to stabilize or reduce pollution in soil, sludge, sediment, surface water or groundwater bodies with low concentrations of pollutants a large clean space and shallow depths site offers favorable treatment plant (associated with US Environmental Protection Agency 0.2011) circumstances. Phytoremediation is the use of plants for the treatment of contaminated soil sites and sediments J. Bio. Env. Sci. 2017 90 | Shakoor et al. and water. It is best applied at sites of persistent organic pollution with shallow, nutrient, or metal. Phytoremediation is an emerging technology for contaminated sites is attractive because of its low cost and versatility (Schnoor, 1997). Contaminated soils on the site using the processing plants. Phytoremediation is a plant that excessive accumulation of metals in contaminated soils in growth (National Research Council, 1997). Phytoremediation to facilitate the concentration of pollutants in contaminated soil, water or air is composed, and plants able to contain, degrade or eliminate metals, pesticides, solvents, explosives, crude oil and its derivatives, and other contaminants in the media that contain them. Phytoremediation have several techniques and these techniques depend on different factors, like soil type, contaminant type, soil depth and level of ground water. Special operation situations and specific technology applied at the contaminated site (Hyman and Dupont 2001). Techniques of phytoremediation Different techniques are involved in phytoremediation, such as phytoextraction, phytostabilisation, phytotransformation, phytostimulation, phytovolatilization, and rhizofiltration. 
Phytoextraction Phytoextraction, also called phytoabsorption or phytoaccumulation, is the technique in which heavy metals are removed by root uptake from the water and soil environment and accumulated in the shoots (Rafati et al., 2011). Phytostabilisation Phytostabilisation, also known as phytoimmobilization, is the technique in which different types of plants are used to stabilize contaminants in the soil environment (Ali et al., 2013). It reduces the bioavailability and mobility of the contaminants and thus helps prevent their movement into the food chain as well as into groundwater (Erakhrumen, 2007). Nevertheless, phytostabilisation only stops the movement of heavy metals; it is not a permanent solution for removing the contamination from the soil. Basically, phytostabilisation is a management approach for inactivating potentially toxic heavy metal contaminants in the soil environment (Vangronsveld et al., 2009).",
"title": ""
},
{
"docid": "e8ff6978cae740152a918284ebe49fe3",
"text": "Cross-lingual sentiment classification aims to predict the sentiment orientation of a text in a language (named as the target language) with the help of the resources from another language (named as the source language). However, current cross-lingual performance is normally far away from satisfaction due to the huge difference in linguistic expression and social culture. In this paper, we suggest to perform active learning for cross-lingual sentiment classification, where only a small scale of samples are actively selected and manually annotated to achieve reasonable performance in a short time for the target language. The challenge therein is that there are normally much more labeled samples in the source language than those in the target language. This makes the small amount of labeled samples from the target language flooded in the aboundance of labeled samples from the source language, which largely reduces their impact on cross-lingual sentiment classification. To address this issue, we propose a data quality controlling approach in the source language to select high-quality samples from the source language. Specifically, we propose two kinds of data quality measurements, intraand extra-quality measurements, from the certainty and similarity perspectives. Empirical studies verify the appropriateness of our active learning approach to cross-lingual sentiment classification.",
"title": ""
},
{
"docid": "01be341cfcfe218896c795d769c66e69",
"text": "This letter proposes a multi-user uplink channel estimation scheme for mmWave massive MIMO over frequency selective fading (FSF) channels. Specifically, by exploiting the angle-domain structured sparsity of mmWave FSF channels, a distributed compressive sensing-based channel estimation scheme is proposed. Moreover, by using the grid matching pursuit strategy with adaptive measurement matrix, the proposed algorithm can solve the power leakage problem caused by the continuous angles of arrival or departure. Simulation results verify the good performance of the proposed solution.",
"title": ""
},
{
"docid": "045162dbad88cd4d341eed216779bb9b",
"text": "BACKGROUND\nCrocodile oil and its products are used as ointments for burns and scalds in traditional medicines. A new ointment formulation - crocodile oil burn ointment (COBO) was developed to provide more efficient wound healing activity. The purpose of the study was to evaluate the burn healing efficacy of this new formulation by employing deep second-degree burns in a Wistar rat model. The analgesic and anti-inflammatory activities of COBO were also studied to provide some evidences for its further use.\n\n\nMATERIALS AND METHODS\nThe wound healing potential of this formulation was evaluated by employing a deep second-degree burn rat model and the efficiency was comparatively assessed against a reference ointment - (1% wt/wt) silver sulfadiazine (SSD). After 28 days, the animals were euthanized and the wounds were removed for transversal and longitudinal histological studies. Acetic acid-induced writhing in mice was used to evaluate the analgesic activity and its anti-inflammatory activity was observed in xylene -induced edema in mice.\n\n\nRESULTS\nCOBO enhanced the burn wound healing (20.5±1.3 d) as indicated by significant decrease in wound closure time compared with the burn control (25.0±2.16 d) (P<0.01). Hair follicles played an importance role in the physiological functions of the skin, and their growth in the wound could be revealed for the skin regeneration situation. Histological results showed that the hair follicles were well-distributed in the post-burn skin of COBO treatment group, and the amounts of total, active, primary and secondary hair follicles in post-burn 28-day skin of COBO treatment groups were more than those in burn control and SSD groups. On the other hand, the analgesic and anti-inflammatory activity of COBO were much better than those of control group, while they were very close to those of moist exposed burn ointment (MEBO).\n\n\nCONCLUSIONS\nCOBO accelerated wound closure, reduced inflammation, and had analgesic effects compared with SSD in deep second degree rat burn model. These findings suggest that COBO would be a potential therapy for treating human burns. Abbreviations: COBO, crocodile oil burn ointment; SSD, silver sulfadiazine; MEBO, moist exposed burn ointment; TCM, traditional Chinese medicine; CHM, Chinese herbal medicine; GC-MS, gas chromatography-mass spectrometry.",
"title": ""
},
{
"docid": "162bfca981e89b1b3174a030ad8f64c6",
"text": "This paper addresses the consensus problem of multiagent systems with a time-invariant communication topology consisting of general linear node dynamics. A distributed observer-type consensus protocol based on relative output measurements is proposed. A new framework is introduced to address in a unified way the consensus of multiagent systems and the synchronization of complex networks. Under this framework, the consensus of multiagent systems with a communication topology having a spanning tree can be cast into the stability of a set of matrices of the same low dimension. The notion of consensus region is then introduced and analyzed. It is shown that there exists an observer-type protocol solving the consensus problem and meanwhile yielding an unbounded consensus region if and only if each agent is both stabilizable and detectable. A multistep consensus protocol design procedure is further presented. The consensus with respect to a time-varying state and the robustness of the consensus protocol to external disturbances are finally discussed. The effectiveness of the theoretical results is demonstrated through numerical simulations, with an application to low-Earth-orbit satellite formation flying.",
"title": ""
},
{
"docid": "0f5ad4bd916a0115215adc938d46bf2c",
"text": "We propose a new paradigm to effortlessly get a portable geometric Level Of Details (LOD) for a point cloud inside a Point Cloud Server. The point cloud is divided into groups of points (patch), then each patch is reordered (MidOc ordering) so that reading points following this order provides more and more details on the patch. This LOD have then multiple applications: point cloud size reduction for visualisation (point cloud streaming) or speeding of slow algorithm, fast density peak detection and correction as well as safeguard for methods that may be sensible to density variations. The LOD method also embeds information about the sensed object geometric nature, and thus can be used as a crude multi-scale dimensionality descriptor, enabling fast classification and on-the-fly filtering for basic classes.",
"title": ""
},
{
"docid": "dedef832d8b54cac137277afe9cd27eb",
"text": "The number of strands to minimize loss in a litz-wire transformer winding is determined. With fine stranding, the ac resistance factor decreases, but dc resistance increases because insulation occupies more of the window area. A power law to model insulation thickness is combined with standard analysis of proximity-effect losses.",
"title": ""
},
{
"docid": "228cd0696e0da6f18a22aa72f009f520",
"text": "Modern Convolutional Neural Networks (CNN) are extremely powerful on a range of computer vision tasks. However, their performance may degrade when the data is characterised by large intra-class variability caused by spatial transformations. The Spatial Transformer Network (STN) is currently the method of choice for providing CNNs the ability to remove those transformations and improve performance in an end-to-end learning framework. In this paper, we propose Densely Fused Spatial Transformer Network (DeSTNet), which, to our best knowledge, is the first dense fusion pattern for combining multiple STNs. Specifically, we show how changing the connectivity pattern of multiple STNs from sequential to dense leads to more powerful alignment modules. Extensive experiments on three benchmarks namely, MNIST, GTSRB, and IDocDB show that the proposed technique outperforms related state-of-the-art methods (i.e., STNs and CSTNs) both in terms of accuracy and robustness.",
"title": ""
},
{
"docid": "f3090b5de9f3f1c29f261a2ef86bac61",
"text": "The K-means algorithm is a popular data-clustering algorithm. However, one of its drawbacks is the requirement for the number of clusters, K, to be specified before the algorithm is applied. This paper first reviews existing methods for selecting the number of clusters for the algorithm. Factors that affect this selection are then discussed and a new measure to assist the selection is proposed. The paper concludes with an analysis of the results of using the proposed measure to determine the number of clusters for the K-means algorithm for different data sets.",
"title": ""
},
{
"docid": "e870f2fe9a26b241bdeca882b6186169",
"text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd recommender systems handbook as the choice of reading, you can find here.",
"title": ""
},
{
"docid": "a6f1480f52d142a013bb88a92e47b0d7",
"text": "An isolated switched high step up boost DC-DC converter is discussed in this paper. The main objective of this paper is to step up low voltage to very high voltage. This paper mainly initiates at boosting a 30V DC into 240V DC. The discussed converter benefits from the continuous input current. Usually, step-up DC-DC converters are suitable for input whose voltage level is very low. The circuital design comprises of four main stages. Firstly, an impedance network which is used to boost the low input voltage. Secondly a switching network which is used to boost the input voltage then an isolation transformer which is used to provide higher boosting ability and finally a voltage multiplier rectifier which is used to rectify the secondary voltage of the transformer. No switching deadtime is required, which increases the reliability of the converter. Comparing with the existing step-up topologies indicates that this new design is hybrid, portable, higher power density and the size of the whole system is also reduced. The principles as well as operations were analysed and experimentally worked out, which provides a higher efficiency. KeywordImpedance Network, Switching Network, Isolation Transformer, Voltage Multiplier Rectifier, MicroController, DC-DC Boost Converter __________________________________________________________________________________________________",
"title": ""
},
{
"docid": "1c7251c55cf0daea9891c8a522bbd3ec",
"text": "The role of computers in the modern office has divided ouractivities between virtual interactions in the realm of thecomputer and physical interactions with real objects within thetraditional office infrastructure. This paper extends previous workthat has attempted to bridge this gap, to connect physical objectswith virtual representations or computational functionality, viavarious types of tags. We discuss a variety of scenarios we haveimplemented using a novel combination of inexpensive, unobtrusiveand easy to use RFID tags, tag readers, portable computers andwireless networking. This novel combination demonstrates theutility of invisibly, seamlessly and portably linking physicalobjects to networked electronic services and actions that arenaturally associated with their form.",
"title": ""
},
{
"docid": "1ad6efaaf4e3201d59c62cd3dbcc01a6",
"text": "•Combine Bayesian change point detection with Gaussian Processes to define a nonstationary time series model. •Central aim is to react to underlying regime changes in an online manner. •Able to integrate out all latent variables and optimize hyperparameters sequentially. •Explore three alternative ways of augmenting GP models to handle nonstationarity (GPTS, ARGPCP and NSGP – see below). •A Bayesian approach (BOCPD) for online change point detection was introduced in [1]. •BOCPD introduces a latent variable representing the run length at time t and adapts predictions via integrating out the run length. •BOCPD has two key ingredients: –Any model which can construct a predictive density for future observations, in particular, p(xt|x(t−τ ):(t−1), θm), i.e., the “underlying predictive model” (UPM). –A hazard function H(r|θh) which encodes our prior belief in a change point occuring after observing a run length r.",
"title": ""
},
{
"docid": "cc05dca89bf1e3f53cf7995e547ac238",
"text": "Ensembles of randomized decision trees, known as Random Forests, have become a valuable machine learning tool for addressing many computer vision problems. Despite their popularity, few works have tried to exploit contextual and structural information in random forests in order to improve their performance. In this paper, we propose a simple and effective way to integrate contextual information in random forests, which is typically reflected in the structured output space of complex problems like semantic image labelling. Our paper has several contributions: We show how random forests can be augmented with structured label information and be used to deliver structured low-level predictions. The learning task is carried out by employing a novel split function evaluation criterion that exploits the joint distribution observed in the structured label space. This allows the forest to learn typical label transitions between object classes and avoid locally implausible label configurations. We provide two approaches for integrating the structured output predictions obtained at a local level from the forest into a concise, global, semantic labelling. We integrate our new ideas also in the Hough-forest framework with the view of exploiting contextual information at the classification level to improve the performance on the task of object detection. Finally, we provide experimental evidence for the effectiveness of our approach on different tasks: Semantic image labelling on the challenging MSRCv2 and CamVid databases, reconstruction of occluded handwritten Chinese characters on the Kaist database and pedestrian detection on the TU Darmstadt databases.",
"title": ""
}
] |
scidocsrr
|
a49cf65c974cb7f2f68ec71aa194eaf1
|
Chemical named entities recognition: a review on approaches and applications
|
[
{
"docid": "ccb5a426e9636186d2819f34b5f0d5e8",
"text": "MOTIVATION\nThe discovery of regulatory pathways, signal cascades, metabolic processes or disease models requires knowledge on individual relations like e.g. physical or regulatory interactions between genes and proteins. Most interactions mentioned in the free text of biomedical publications are not yet contained in structured databases.\n\n\nRESULTS\nWe developed RelEx, an approach for relation extraction from free text. It is based on natural language preprocessing producing dependency parse trees and applying a small number of simple rules to these trees. We applied RelEx on a comprehensive set of one million MEDLINE abstracts dealing with gene and protein relations and extracted approximately 150,000 relations with an estimated performance of both 80% precision and 80% recall.\n\n\nAVAILABILITY\nThe used natural language preprocessing tools are free for use for academic research. Test sets and relation term lists are available from our website (http://www.bio.ifi.lmu.de/publications/RelEx/).",
"title": ""
}
] |
[
{
"docid": "709707d1ca7155380743335a288aabe4",
"text": "Following the onset of maturation, female athletes have a significantly higher risk for anterior cruciate ligament (ACL) injury compared with male athletes. While multiple sex differences in lower-extremity neuromuscular control and biomechanics have been identified as potential risk factors for ACL injury in females, the majority of these studies have focused specifically on the knee joint. However, increasing evidence in the literature indicates that lumbo-pelvic (core) control may have a large effect on knee-joint control and injury risk. This review examines the published evidence on the contributions of the trunk and hip to knee-joint control. Specifically, the sex differences in potential proximal controllers of the knee as risk factors for ACL injury are identified and discussed. Sex differences in trunk and hip biomechanics have been identified in all planes of motion (sagittal, coronal and transverse). Essentially, female athletes show greater lateral trunk displacement, altered trunk and hip flexion angles, greater ranges of trunk motion, and increased hip adduction and internal rotation during sport manoeuvres, compared with their male counterparts. These differences may increase the risk of ACL injury among female athletes. Prevention programmes targeted towards trunk and hip neuromuscular control may decrease the risk for ACL injuries.",
"title": ""
},
{
"docid": "381a11fe3d56d5850ec69e2e9427e03f",
"text": "We present an approximation algorithm that takes a pool of pre-trained models as input and produces from it a cascaded model with similar accuracy but lower average-case cost. Applied to state-of-the-art ImageNet classification models, this yields up to a 2x reduction in floating point multiplications, and up to a 6x reduction in average-case memory I/O. The auto-generated cascades exhibit intuitive properties, such as using lower-resolution input for easier images and requiring higher prediction confidence when using a computationally cheaper model.",
"title": ""
},
{
"docid": "279c377e12cdb8aec7242e0e9da2dd26",
"text": "It is well accepted that pain is a multidimensional experience, but little is known of how the brain represents these dimensions. We used positron emission tomography (PET) to indirectly measure pain-evoked cerebral activity before and after hypnotic suggestions were given to modulate the perceived intensity of a painful stimulus. These techniques were similar to those of a previous study in which we gave suggestions to modulate the perceived unpleasantness of a noxious stimulus. Ten volunteers were scanned while tonic warm and noxious heat stimuli were presented to the hand during four experimental conditions: alert control, hypnosis control, hypnotic suggestions for increased-pain intensity and hypnotic suggestions for decreased-pain intensity. As shown in previous brain imaging studies, noxious thermal stimuli presented during the alert and hypnosis-control conditions reliably activated contralateral structures, including primary somatosensory cortex (S1), secondary somatosensory cortex (S2), anterior cingulate cortex, and insular cortex. Hypnotic modulation of the intensity of the pain sensation led to significant changes in pain-evoked activity within S1 in contrast to our previous study in which specific modulation of pain unpleasantness (affect), independent of pain intensity, produced specific changes within the ACC. This double dissociation of cortical modulation indicates a relative specialization of the sensory and the classical limbic cortical areas in the processing of the sensory and affective dimensions of pain.",
"title": ""
},
{
"docid": "48f8c9b99afa5e42592cb9106198e803",
"text": "The recent explosion of interest in the bioactivity of the flavonoids of higher plants is due, at least in part, to the potential health benefits of these polyphenolic components of major dietary constituents. This review article discusses the biological properties of the flavonoids and focuses on the relationship between their antioxidant activity, as hydrogen donating free radical scavengers, and their chemical structures. This culminates in a proposed hierarchy of antioxidant activity in the aqueous phase. The cumulative findings concerning structure-antioxidant activity relationships in the lipophilic phase derive from studies on fatty acids, liposomes, and low-density lipoproteins; the factors underlying the influence of the different classes of polyphenols in enhancing their resistance to oxidation are discussed and support the contention that the partition coefficients of the flavonoids as well as their rates of reaction with the relevant radicals define the antioxidant activities in the lipophilic phase.",
"title": ""
},
{
"docid": "289849c6cb55ed61d28c8fe5132fedde",
"text": "An adaptive multilevel wavelet collocation method for solving multi-dimensional elliptic problems with localized structures is described. The method is based on multi-dimensional second generation wavelets, and is an extension of the dynamically adaptive second generation wavelet collocation method for evolution problems [Int. J. Comp. Fluid Dyn. 17 (2003) 151]. Wavelet decomposition is used for grid adaptation and interpolation, while a hierarchical finite difference scheme, which takes advantage of wavelet multilevel decomposition, is used for derivative calculations. The multilevel structure of the wavelet approximation provides a natural way to obtain the solution on a near optimal grid. In order to accelerate the convergence of the solver, an iterative procedure analogous to the multigrid algorithm is developed. The overall computational complexity of the solver is O(N ), where N is the number of adapted grid points. The accuracy and computational efficiency of the method are demonstrated for the solution of twoand three-dimensional elliptic test problems.",
"title": ""
},
{
"docid": "d118a5d9904a88ffd84a7f7c08970343",
"text": "We present FingOrbits, a wearable interaction technique using synchronized thumb movements. A thumb-mounted ring with an inertial measurement unit and a contact microphone are used to capture thumb movements when rubbing against the other fingers. Spectral information of the movements are extracted and fed into a classification backend that facilitates gesture discrimination. FingOrbits enables up to 12 different gestures through detecting three rates of movement against each of the four fingers. Through a user study with 10 participants (7 novices, 3 experts), we demonstrate that FingOrbits can distinguish up to 12 thumb gestures with an accuracy of 89% to 99% rendering the approach applicable for practical applications.",
"title": ""
},
{
"docid": "024e9600707203ffcf35ca96dff42a87",
"text": "The blockchain technology is gaining momentum because of its possible application to other systems than the cryptocurrency one. Indeed, blockchain, as a de-centralized system based on a distributed digital ledger, can be utilized to securely manage any kind of assets, constructing a system that is independent of any authorization entity. In this paper, we briefly present blockchain and our work in progress, the VMOA blockchain, to secure virtual machine orchestration operations for cloud computing and network functions virtualization systems. Using tutorial examples, we describe our design choices and draw implementation plans.",
"title": ""
},
{
"docid": "a77934e089a4577a9750fab030a12358",
"text": "A growing public awareness of the potential negative impacts of corporate activities on the natural environment and society compels large companies to invest increasing resources in the communication of their responsible conduct. This paper employs Appraisal theory in a comparative analysis of BP’s and IKEA’s 2009 social reports, each company’s record of their nonfinancial performance. The main objective is to explore how, through Appraisal resources, BP and IKEA construct their corporate identity and relationship with their stakeholders. The analysis reveals two markedly different approaches to the construction of a responsible corporate identity. While BP deploys interpersonal resources to portray itself as a trustworthy and authoritative expert, IKEA discloses itself as a sensitive and caring corporation, engaged in a continual effort to improve. These differences are interpreted in light of the legitimisation challenges the two companies face. Keywords: discourse analysis, Appraisal, evaluation, stance, corpus-based approaches to evaluation, corporate social reports, corporate identity, legitimacy theory, corporate social responsibility This\t\r is\t\r a\t\r pre-‐print\t\r version\t\r of\t\r an\t\r article\t\r that\t\r has\t\r been\t\r published\t\r in\t\r the journal\t\r Discourse\t\r &\t\r Communication,\t\r volume\t\r 6,\t\r issue\t\r 1.\t\r It\t\r can\t\r be\t\r cited as:\t\r Fuoli,\t\r M.\t\r 2012.\t\r ‘Assessing\t\r social\t\r responsibility:\t\r A\t\r quantitative analysis\t\r of\t\r Appraisal\t\r in\t\r BP’s\t\r and\t\r IKEA’s\t\r social\t\r reports’,\t\r Discourse\t\r and Communication\t\r 6\t\r (1),\t\r pp.\t\r 55–81.\t\r For\t\r direct\t\r quotations\t\r and\t\r page numbers,\t\r please\t\r check\t\r the\t\r published\t\r version,\t\r which\t\r can\t\r be downloaded\t\r from\t\r here: http://dcm.sagepub.com/content/6/1/55.abstract",
"title": ""
},
{
"docid": "4510492476ae812905d22b567cfe1716",
"text": "Different language markers can be used to reveal the differences between structures of truthful and deceptive (fake) news. Two experiments are held: the first one is based on lexics level markers, the second one on discourse level is based on rhetorical relations categories (frequencies). Corpus consists of 174 truthful and deceptive news stories in Russian. Support Vector Machines and Random Forest Classifier were used for text classification. The best results for lexical markers we got by using Support Vector Ma-chines with rbf kernel (f-measure 0.65). The model could be developed and be used as a preliminary filter for fake news detection.",
"title": ""
},
{
"docid": "8ba2b376995e3a6a02720a73012d590b",
"text": "This paper focuses on reducing the power consumption of wireless microsensor networks. Therefore, a communication protocol named LEACH (Low-Energy Adaptive Clustering Hierarchy) is modified. We extend LEACH’s stochastic clusterhead selection algorithm by a deterministic component. Depending on the network configuration an increase of network lifetime by about 30 % can be accomplished. Furthermore, we present a new approach to define lifetime of microsensor networks using three new metrics FND (First Node Dies), HNA (Half of the Nodes Alive), and LND (Last Node Dies).",
"title": ""
},
{
"docid": "26b592326edeac03578d8b52ce33f2e2",
"text": "This paper proposes a model of information aesthetics in the context of information visualization. It addresses the need to acknowledge a recently emerging number of visualization projects that combine information visualization techniques with principles of creative design. The proposed model contributes to a better understanding of information aesthetics as a potentially independent research field within visualization that specifically focuses on the experience of aesthetics, dataset interpretation and interaction. The proposed model is based on analysing existing visualization techniques by their interpretative intent and data mapping inspiration. It reveals information aesthetics as the conceptual link between information visualization and visualization art, and includes the fields of social and ambient visualization. This model is unique in its focus on aesthetics as the artistic influence on the technical implementation and intended purpose of a visualization technique, rather than subjective aesthetic judgments of the visualization outcome. This research provides a framework for understanding aesthetics in visualization, and allows for new design guidelines and reviewing criteria.",
"title": ""
},
{
"docid": "f87fea9cd76d1545c34f8e813347146e",
"text": "In fault detection and isolation, diagnostic test results are commonly used to compute a set of diagnoses, where each diagnosis points at a set of components which might behave abnormally. In distributed systems consisting of multiple control units, the test results in each unit can be used to compute local diagnoses while all test results in the complete system give the global diagnoses. It is an advantage for both repair and fault-tolerant control to have access to the global diagnoses in each unit since these diagnoses represent all test results in all units. However, when the diagnoses, for example, are to be used to repair a unit, only the components that are used by the unit are of interest. The reason for this is that it is only these components that could have caused the abnormal behavior. However, the global diagnoses might include components from the complete system and therefore often include components that are superfluous for the unit. Motivated by this observation, a new type of diagnosis is proposed, namely, the condensed diagnosis. Each unit has a unique set of condensed diagnoses which represents the global diagnoses. The benefit of the condensed diagnoses is that they only include components used by the unit while still representing the global diagnoses. The proposed method is applied to an automotive vehicle, and the results from the application study show the benefit of using condensed diagnoses compared to global diagnoses.",
"title": ""
},
{
"docid": "20bcf837048350386e091eb33ad130cc",
"text": "We describe a design pattern for writing programs that traverse data structures built from rich mutually-recursive data types. Such programs often have a great deal of \"boilerplate\" code that simply walks the structure, hiding a small amount of \"real\" code that constitutes the reason for the traversal.Our technique allows most of this boilerplate to be written once and for all, or even generated mechanically, leaving the programmer free to concentrate on the important part of the algorithm. These generic programs are much more adaptive when faced with data structure evolution because they contain many fewer lines of type-specific code.Our approach is simple to understand, reasonably efficient, and it handles all the data types found in conventional functional programming languages. It makes essential use of rank-2 polymorphism, an extension found in some implementations of Haskell. Further it relies on a simple type-safe cast operator.",
"title": ""
},
{
"docid": "b277cdfdc836c8f3f1a4594914641381",
"text": "Are linguistic properties and behaviors important to recognize terms? Are statistical measures effective to extract terms? Is it possible to capture a sort of termhood with computation linguistic techniques? Or maybe, terms are too much sensitive to exogenous and pragmatic factors that cannot be confined in computational linguistic? All these questions are still open. This study tries to contribute in the search of an answer, with the belief that it can be found only through a careful experimental analysis of real case studies and a study of their correlation with theoretical insights.",
"title": ""
},
{
"docid": "e0724c87fd4344e01cb9260fdd36856c",
"text": "In this paper we introduce a multi-objective auto-tuning framework comprising compiler and runtime components. Focusing on individual code regions, our compiler uses a novel search technique to compute a set of optimal solutions, which are encoded into a multi-versioned executable. This enables the runtime system to choose specifically tuned code versions when dynamically adjusting to changing circumstances.\n We demonstrate our method by tuning loop tiling in cache-sensitive parallel programs, optimizing for both runtime and efficiency. Our static optimizer finds solutions matching or surpassing those determined by exhaustively sampling the search space on a regular grid, while using less than 4% of the computational effort on average. Additionally, we show that parallelism-aware multi-versioning approaches like our own gain a performance improvement of up to 70% over solutions tuned for only one specific number of threads.",
"title": ""
},
{
"docid": "f10294ed332670587cf9c100f2d75428",
"text": "In ancient times, people exchanged their goods and services to obtain what they needed (such as clothes and tools) from other people. This system of bartering compensated for the lack of currency. People offered goods/services and received in kind other goods/services. Now, despite the existence of multiple currencies and the progress of humanity from the Stone Age to the Byte Age, people still barter but in a different way. Mainly, people use money to pay for the goods they purchase and the services they obtain.",
"title": ""
},
{
"docid": "199079ff97d1a48819f8185c2ef23472",
"text": "Identifying domain-dependent opinion words is a key problem in opinion mining and has been studied by several researchers. However, existing work has been focused on adjectives and to some extent verbs. Limited work has been done on nouns and noun phrases. In our work, we used the feature-based opinion mining model, and we found that in some domains nouns and noun phrases that indicate product features may also imply opinions. In many such cases, these nouns are not subjective but objective. Their involved sentences are also objective sentences and imply positive or negative opinions. Identifying such nouns and noun phrases and their polarities is very challenging but critical for effective opinion mining in these domains. To the best of our knowledge, this problem has not been studied in the literature. This paper proposes a method to deal with the problem. Experimental results based on real-life datasets show promising results.",
"title": ""
},
{
"docid": "c11ac0c3e873e13a411ccfd7e271be7c",
"text": "Recommender systems show increasingly importance with the development of E-commerce, news and multimedia applications. Traditional recommendation algorithms such as collaborative-filtering-based methods and graph-based methods mainly use items’ original attributes and relationships between items and users, ignoring items’ chronological order in browsing sessions. In recent years, RNN-based methods show their superiority when dealing with the sequential data, and some modified RNN models have been proposed. However, these RNN models only use the sequence order of items and neglect items’ browsing time information. It is widely accepted that users tend to spend more time on their interested items, and these interested items are always closely related to users’ current target. Based on the above view, items’ browsing time is an important feature in recommendations. In this paper, we propose a modified RNN-based recommender system called TA4Rec, which can recommend the probable item that may be clicked in the next moment. Our main contribution is to introduce a method to calculate the time-attention factors from browsing items’ duration time and add time-attention factors to the RNN-based model. We conduct experiments on RecSys Challenge 2015 dataset and the result shows that TA4REC model has gained obvious improvement on session-based recommendations than the classic session-based recommender method.",
"title": ""
},
{
"docid": "98ac7e59d28d63db1e572ab160b6aa64",
"text": "We show that the Learning with Errors (LWE) problem is classically at least as hard as standard worst-case lattice problems. Previously this was only known under quantum reductions.\n Our techniques capture the tradeoff between the dimension and the modulus of LWE instances, leading to a much better understanding of the landscape of the problem. The proof is inspired by techniques from several recent cryptographic constructions, most notably fully homomorphic encryption schemes.",
"title": ""
}
] |
scidocsrr
|
50e449de1faa3af65b198a0fb6353cdd
|
Distinct balance of excitation and inhibition in an interareal feedforward and feedback circuit of mouse visual cortex.
|
[
{
"docid": "1f364472fcf7da9bfc18d9bb8a521693",
"text": "The Cre/lox system is widely used in mice to achieve cell-type-specific gene expression. However, a strong and universally responding system to express genes under Cre control is still lacking. We have generated a set of Cre reporter mice with strong, ubiquitous expression of fluorescent proteins of different spectra. The robust native fluorescence of these reporters enables direct visualization of fine dendritic structures and axonal projections of the labeled neurons, which is useful in mapping neuronal circuitry, imaging and tracking specific cell populations in vivo. Using these reporters and a high-throughput in situ hybridization platform, we are systematically profiling Cre-directed gene expression throughout the mouse brain in several Cre-driver lines, including new Cre lines targeting different cell types in the cortex. Our expression data are displayed in a public online database to help researchers assess the utility of various Cre-driver lines for cell-type-specific genetic manipulation.",
"title": ""
}
] |
[
{
"docid": "dce51c1fed063c9d9776fce998209d25",
"text": "While classical kernel-based learning algorithms are based on a single kernel, in practice it is often desirable to use multiple kernels. Lankriet et al. (2004) considered conic combinations of kernel matrices for classification, leading to a convex quadratically constrained quadratic program. We show that it can be rewritten as a semi-infinite linear program that can be efficiently solved by recycling the standard SVM implementations. Moreover, we generalize the formulation and our method to a larger class of problems, including regression and one-class classification. Experimental results show that the proposed algorithm works for hundred thousands of examples or hundreds of kernels to be combined, and helps for automatic model selection, improving the interpretability of the learning result. In a second part we discuss general speed up mechanism for SVMs, especially when used with sparse feature maps as appear for string kernels, allowing us to train a string kernel SVM on a 10 million real-world splice dataset from computational biology. We integrated Multiple Kernel Learning in our Machine Learning toolbox SHOGUN for which the source code is publicly available at http://www.fml.tuebingen.mpg.de/raetsch/projects/shogun.",
"title": ""
},
{
"docid": "85da43096d4ef2dcb3f8f9ae9ea2db35",
"text": "We present an approach that combines automatic features learned by convolutional neural networks (CNN) and handcrafted features computed by the bag-of-visual-words (BOVW) model in order to achieve state-of-the-art results in facial expression recognition. To obtain automatic features, we experiment with multiple CNN architectures, pretrained models and training procedures, e.g. Dense-SparseDense. After fusing the two types of features, we employ a local learning framework to predict the class label for each test image. The local learning framework is based on three steps. First, a k-nearest neighbors model is applied for selecting the nearest training samples for an input test image. Second, a one-versus-all Support Vector Machines (SVM) classifier is trained on the selected training samples. Finally, the SVM classifier is used for predicting the class label only for the test image it was trained for. Although we used local learning in combination with handcrafted features in our previous work, to the best of our knowledge, local learning has never been employed in combination with deep features. The experiments on the 2013 Facial Expression Recognition (FER) Challenge data set and the FER+ data set demonstrate that our approach achieves state-ofthe-art results. With a top accuracy of 75.42% on the FER 2013 data set and 87.76% on the FER+ data set, we surpass all competition by more than 2% on both data sets.",
"title": ""
},
{
"docid": "f09f5d7e0f75d4b0fdbd8c40860c4473",
"text": "Purpose – The purpose of this paper is to examine the diffusion of a popular Korean music video on the video-sharing web site YouTube. It applies a webometric approach in the diffusion of innovations framework to study three elements of diffusion in a Web 2.0 environment: users, user-to-user relationship and user-generated comment. Design/methodology/approach – The webometric approach combines profile analyses, social network analyses, semantic and sentiment analyses. Findings – The results show that male users in the US played a dominant role in the early-stage diffusion. The dominant users represented the innovators and early adopters in the evaluation stage of the diffusion, and they engaged in continuous discussions about the cultural origin of the video and expressed criticisms. Overall, the discussion between users varied according to their gender, age, and cultural background. Specifically, male users were more interactive than female users, and users in countries culturally similar to Korea were more likely to express favourable attitudes toward the video. Originality/value – The study provides a webometric approach to examine the Web 2.0-based social system in the early-stage global diffusion of cultural offerings. This approach connects the diffusion of innovations framework to the new context of Web 2.0-based diffusion.",
"title": ""
},
{
"docid": "c57a689627f1af0bf872e4d0c5953a28",
"text": "Image diffusion plays a fundamental role for the task of image denoising. The recently proposed trainable nonlinear reaction diffusion (TNRD) model defines a simple but very effective framework for image denoising. However, as the TNRD model is a local model, whose diffusion behavior is purely controlled by information of local patches, it is prone to create artifacts in the homogenous regions and over-smooth highly textured regions, especially in the case of strong noise levels. Meanwhile, it is widely known that the non-local self-similarity (NSS) prior stands as an effective image prior for image denoising, which has been widely exploited in many non-local methods. In this work, we are highly motivated to embed the NSS prior into the TNRD model to tackle its weaknesses. In order to preserve the expected property that end-to-end training remains available, we exploit the NSS prior by defining a set of non-local filters, and derive our proposed trainable non-local reaction diffusion (TNLRD) model for image denoising. Together with the local filters and influence functions, the non-local filters are learned by employing loss-specific training. The experimental results show that the trained TNLRD model produces visually plausible recovered images with more textures and less artifacts, compared to its local versions. Moreover, the trained TNLRD model can achieve strongly competitive performance to recent state-of-the-art image denoising methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM).",
"title": ""
},
{
"docid": "62a8548527371acb657d9552ab41d699",
"text": "This paper proposes a novel dynamic gait of locomotion for hexapedal robots which enables them to crawl forward, backward, and rotate using a single actuator. The gait exploits the compliance difference between the two sides of the tripods, to generate clockwise or counter clockwise rotation by controlling the acceleration of the robot. The direction of turning depends on the configuration of the legs -tripod left of right- and the direction of the acceleration. Alternating acceleration in successive steps allows for continuous rotation in the desired direction. An analysis of the locomotion is presented as a function of the mechanical properties of the robot and the contact with the surface. A numerical simulation was performed for various conditions of locomotion. The results of the simulation and analysis were compared and found to be in excellent match.",
"title": ""
},
{
"docid": "7531be3af1285a4c1c0b752d1ee45f52",
"text": "Given an undirected graph with weight for each vertex, the maximum weight clique problem is to find the clique of the maximum weight. Östergård proposed a fast exact algorithm for solving this problem. We show his algorithm is not efficient for very dense graphs. We propose an exact algorithm for the problem, which is faster than Östergård’s algorithm in case the graph is dense. We show the efficiency of our algorithm with some experimental results.",
"title": ""
},
{
"docid": "9584d194e05359ef5123c6b3d71e1c75",
"text": "A bloom filter is a randomized data structure for performing approximate membership queries. It is being increasingly used in networking applications ranging from security to routing in peer to peer networks. In order to meet a given false positive rate, the amount of memory required by a bloom filter is a function of the number of elements in the set. We consider the problem of minimizing the memory requirements in cases where the number of elements in the set is not known in advance but the distribution or moment information of the number of elements is known. We show how to exploit such information to minimize the expected amount of memory required for the filter. We also show how this approach can significantly reduce memory requirement when bloom filters are constructed for multiple sets in parallel. We show analytically as well as experiments on synthetic and trace data that our approach leads to one to three orders of magnitude reduction in memory compared to a standard bloom filter.",
"title": ""
},
{
"docid": "3dcf6c5e59d4472c0b0e25c96b992f3e",
"text": "This paper presents the design of Ultra Wideband (UWB) microstrip antenna consisting of a circular monopole patch antenna with 3 block stepped (wing). The antenna design is an improvement from previous research and it is simulated using CST Microwave Studio software. This antenna was designed on Rogers 5880 printed circuit board (PCB) with overall size of 26 × 40 × 0.787 mm3 and dielectric substrate, εr = 2.2. The performance of the designed antenna was analyzed in term of bandwidth, gain, return loss, radiation pattern, and verified through actual measurement of the fabricated antenna. 10 dB return loss bandwidth from 3.37 GHz to 10.44 GHz based on 50 ohm characteristic impedance for the transmission line model was obtained.",
"title": ""
},
{
"docid": "501d6ec6163bc8b93fd728412a3e97f3",
"text": "This short paper describes our ongoing research on Greenhouse a zero-positive machine learning system for time-series anomaly detection.",
"title": ""
},
{
"docid": "bea270701da3f8d47b19dc7976000562",
"text": "Spatial information captured from optical remote sensors on board unmanned aerial vehicles (UAVs) has great potential in the automatic surveillance of electrical power infrastructure. For an automatic vision based power line inspection system, detecting power lines from cluttered background an important and challenging task. In this paper, we propose a knowledge-based power line detection method for a vision based UAV surveillance and inspection system. A PCNN filter is developed to remove background noise from the images prior to the Hough transform being employed to detect straight lines. Finally knowledge based line clustering is applied to refine the detection results. The experiment on real image data captured from a UAV platform demonstrates that the proposed approach is effective.",
"title": ""
},
{
"docid": "152c11ef8449d53072bbdb28432641fa",
"text": "Flexible intelligent electronic devices (IEDs) are highly desirable to support free allocation of function to IED by means of software reconfiguration without any change of hardware. The application of generic hardware platforms and component-based software technology seems to be a good solution. Due to the advent of IEC 61850, generic hardware platforms with a standard communication interface can be used to implement different kinds of functions with high flexibility. The remaining challenge is the unified function model that specifies various software components with appropriate granularity and provides a framework to integrate them efficiently. This paper proposes the function-block (FB)-based function model for flexible IEDs. The standard FBs are established by combining the IEC 61850 model and the IEC 61499 model. The design of a simplified distance protection IED using standard FBs is described and investigated. The testing results of the prototype system in MATLAB/Simulink demonstrate the feasibility and flexibility of FB-based IEDs.",
"title": ""
},
{
"docid": "d580021d1e7cfe44e58dbace3d5c7bee",
"text": "We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework to understand social interactions that is based on the finding that cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, both at the behavioral and neural levels. We will first review important aspects of his framework. In a second part, we will discuss how this framework is used to address questions pertaining to artificial agents' social competence. We will focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we will speculate on the consequences of resonance in natural social interactions if humanoid robots are to become integral part of our societies.",
"title": ""
},
{
"docid": "6c00347ffa60b09692bbae45a0c01fc1",
"text": "OBJECTIVES:Eosinophilic gastritis (EG), defined by histological criteria as marked eosinophilia in the stomach, is rare, and large studies in children are lacking. We sought to describe the clinical, endoscopic, and histopathological features of EG, assess for any concurrent eosinophilia at other sites of the gastrointestinal (GI) tract, and evaluate response to dietary and pharmacological therapies.METHODS:Pathology files at our medical center were searched for histological eosinophilic gastritis (HEG) with ≥70 gastric eosinophils per high-power field in children from 2005 to 2011. Pathology slides were evaluated for concurrent eosinophilia in the esophagus, duodenum, and colon. Medical records were reviewed for demographic characteristics, symptoms, endoscopic findings, comorbidities, and response to therapy.RESULTS:Thirty children with severe gastric eosinophilia were identified, median age 7.5 years, 14 of whom had both eosinophilia limited to the stomach and clinical symptoms, fulfilling the clinicopathological definition of EG. Symptoms and endoscopic features were highly variable. History of atopy and food allergies was common. A total of 22% had protein-losing enteropathy (PLE). Gastric eosinophilia was limited to the fundus in two patients. Many patients had associated eosinophilic esophagitis (EoE, 43%) and 21% had eosinophilic enteritis. Response to dietary restriction therapy was high (82% clinical response and 78% histological response). Six out of sixteen patients had persistent EoE despite resolution of their gastric eosinophilia; two children with persistent HEG post therapy developed de novo concurrent EoE.CONCLUSIONS:HEG in children can be present in the antrum and/or fundus. Symptoms and endoscopic findings vary, highlighting the importance of biopsies for diagnosis. HEG is associated with PLE, and with eosinophilia elsewhere in the GI tract including the esophagus. The disease is highly responsive to dietary restriction therapies in children, implicating an allergic etiology. Associated EoE is more resistant to therapy.",
"title": ""
},
{
"docid": "f018db7f20245180d74e4eb07b99e8d3",
"text": "Particle filters can become quite inefficient when being applied to a high-dimensional state space since a prohibitively large number of samples may be required to approximate the underlying density functions with desired accuracy. In this paper, by proposing an adaptive Rao-Blackwellized particle filter for tracking in surveillance, we show how to exploit the analytical relationship among state variables to improve the efficiency and accuracy of a regular particle filter. Essentially, the distributions of the linear variables are updated analytically using a Kalman filter which is associated with each particle in a particle filtering framework. Experiments and detailed performance analysis using both simulated data and real video sequences reveal that the proposed method results in more accurate tracking than a regular particle filter",
"title": ""
},
{
"docid": "0627ea85ea93b56aef5ef378026bc2fc",
"text": "This paper presents a resonant inductive coupling wireless power transfer (RIC-WPT) system with a class-DE and class-E rectifier along with its analytical design procedure. By using the class-DE inverter as a transmitter and the class-E rectifier as a receiver, the designed WPT system can achieve a high power-conversion efficiency because of the class-E ZVS/ZDS conditions satisfied in both the inverter and the rectifier. In the simulation results, the system achieved 79.0 % overall efficiency at 5 W (50 Ω) output power, coupling coefficient 0.072, and 1 MHz operating frequency. Additionally, the simulation results showed good agreement with the design specifications, which indicates the validity of the design procedure.",
"title": ""
},
{
"docid": "da698cfca4e5bbc80fbbab5e8f30e22c",
"text": "This paper base on the application of the Internet of things in the logistics industry as the breakthrough point, to investigate the identification technology, network structure, middleware technology support and so on, which is used in the Internet of things, also to analyze the bottleneck of technology that the Internet of things could meet. At last, summarize the Internet of things’ application in the logistics industry with the intelligent port architecture.",
"title": ""
},
{
"docid": "bbea93884f1f0189be1061939783a1c0",
"text": "Severe adolescent female stress urinary incontinence (SAFSUI) can be defined as female adolescents between the ages of 12 and 17 years complaining of involuntary loss of urine multiple times each day during normal activities or sneezing or coughing rather than during sporting activities. An updated review of its likely prevalence, etiology, and management is required. The case of a 15-year-old female adolescent presenting with a 7-year history of SUI resistant to antimuscarinic medications and 18 months of intensive physiotherapy prompted this review. Issues of performing physical and urodynamic assessment at this young age were overcome in order to achieve the diagnosis of urodynamic stress incontinence (USI). Failed use of tampons was followed by the insertion of (retropubic) suburethral synthetic tape (SUST) under assisted local anesthetic into tissues deemed softer than the equivalent for an adult female. Whereas occasional urinary incontinence can occur in between 6 % and 45 % nulliparous adolescents, the prevalence of non‐neurogenic SAFSUI is uncertain but more likely rare. Risk factors for the occurrence of more severe AFSUI include obesity, athletic activities or high-impact training, and lung diseases such as cystic fibrosis (CF). This first reported use of a SUST in a patient with SAFSUI proved safe and completely curative. Artificial urinary sphincters, periurethral injectables and pubovaginal slings have been tried previously in equivalent patients. SAFSUI is a relatively rare but physically and emotionally disabling presentation. Multiple conservative options may fail, necessitating surgical management; SUST can prove safe and effective.",
"title": ""
},
{
"docid": "b1c62a59a8ce3dd57ab2c00f7657cfef",
"text": "We developed a new method for estimation of vigilance level by using both EEG and EMG signals recorded during transition from wakefulness to sleep. Previous studies used only EEG signals for estimating the vigilance levels. In this study, it was aimed to estimate vigilance level by using both EEG and EMG signals for increasing the accuracy of the estimation rate. In our work, EEG and EMG signals were obtained from 30 subjects. In data preparation stage, EEG signals were separated to its subbands using wavelet transform for efficient discrimination, and chin EMG was used to verify and eliminate the movement artifacts. The changes in EEG and EMG were diagnosed while transition from wakefulness to sleep by using developed artificial neural network (ANN). Training and testing data sets consist of the subbanded components of EEG and power density of EMG signals were applied to the ANN for training and testing the system which gives three situations for the vigilance level of the subject: awake, drowsy, and sleep. The accuracy of estimation was about 98–99% while the accuracy of the previous study, which uses only EEG, was 95–96%.",
"title": ""
},
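The abstract above describes its feature pipeline (wavelet subbands of the EEG plus EMG power fed to an ANN) concretely enough that a small sketch may help. The following is a minimal illustration of that kind of pipeline, not the authors' implementation; the wavelet choice, the single-value EMG summary, and the network size are assumptions, and the sketch relies on the pywt and scikit-learn packages.

```python
# Illustrative sketch: wavelet subband energies of an EEG epoch plus mean EMG
# power as features for a 3-class vigilance classifier (awake/drowsy/sleep).
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def vigilance_features(eeg_epoch, emg_epoch, wavelet="db4", level=4):
    # Decompose the EEG epoch into subbands and keep the log-energy of each.
    coeffs = pywt.wavedec(eeg_epoch, wavelet, level=level)
    eeg_energies = [np.log(np.sum(c ** 2) + 1e-12) for c in coeffs]
    # Summarize the chin EMG by its (log) power density.
    emg_power = np.log(np.mean(emg_epoch ** 2) + 1e-12)
    return np.array(eeg_energies + [emg_power])

def train_vigilance_model(epochs, labels):
    # epochs: list of (eeg, emg) arrays; labels: 0=awake, 1=drowsy, 2=sleep
    X = np.vstack([vigilance_features(eeg, emg) for eeg, emg in epochs])
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    clf.fit(X, labels)
    return clf
```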
{
"docid": "497e7a0ed663b2c125650e05f81feae3",
"text": "In this paper we present a novel computer vision library called UAVision that provides support for different digital cameras technologies, from image acquisition to camera calibration, and all the necessary software for implementing an artificial vision system for the detection of color-coded objects. The algorithms behind the object detection focus on maintaining a low processing time, thus the library is suited for real-world real-time applications. The library also contains a TCP Communications Module, with broad interest in robotic applications where the robots are performing remotely from a basestation or from an user and there is the need to access the images acquired by the robot, both for processing or debug purposes. Practical results from the implementation of the same software pipeline using different cameras as part of different types of vision systems are presented. The vision system software pipeline that we present is designed to cope with application dependent time constraints. The experimental results show that using the UAVision library it is possible to use digital cameras at frame rates up to 50 frames per second when working with images of size up to 1 megapixel. Moreover, we present experimental results to show the effect of the frame rate in the delay between the perception of the world and the action of an autonomous robot, as well as the use of raw data from the camera sensor and the implications of this in terms of the referred delay.",
"title": ""
}
] |
scidocsrr
|
fd302182a0cfdfdb5efdbe8e0d2473c6
|
A Joint Segmentation and Classification Framework for Sentence Level Sentiment Classification
|
[
{
"docid": "6081f8b819133d40522a4698d4212dfc",
"text": "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.",
"title": ""
},
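The record above describes dictionary-based scoring with intensification and negation. Below is a minimal sketch of that general idea; the lexicon entries, intensifier scales, and the size of the polarity shift are illustrative assumptions, not SO-CAL's actual dictionaries or parameters (the shift-style handling of negation follows the general approach described for SO-CAL, but the constant used here is arbitrary).

```python
# Minimal lexicon-based polarity scorer with negation and intensification.
LEXICON = {"good": 3, "excellent": 5, "bad": -3, "terrible": -5}
INTENSIFIERS = {"very": 0.25, "extremely": 0.5, "slightly": -0.5}
NEGATORS = {"not", "never", "no"}

def semantic_orientation(tokens):
    score = 0.0
    for i, token in enumerate(tokens):
        word = token.lower()
        if word not in LEXICON:
            continue
        value = float(LEXICON[word])
        # Look back over a small window for intensifiers and negators.
        for prev in tokens[max(0, i - 2):i]:
            prev = prev.lower()
            if prev in INTENSIFIERS:
                value *= 1.0 + INTENSIFIERS[prev]
            if prev in NEGATORS:
                value -= 4.0 if value > 0 else -4.0  # shift polarity, don't flip
        score += value
    return score  # > 0: positive, < 0: negative

print(semantic_orientation("this movie is not very good".split()))  # -0.25
```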
{
"docid": "d5b986cf02b3f9b01e5307467c1faec2",
"text": "Most sentiment analysis approaches use as baseline a support vector machines (SVM) classifier with binary unigram weights. In this paper, we explore whether more sophisticated feature weighting schemes from Information Retrieval can enhance classification accuracy. We show that variants of the classictf.idf scheme adapted to sentiment analysis provide significant increases in accuracy, especially when using a sublinear function for term frequency weights and document frequency smoothing. The techniques are tested on a wide selection of data sets and produce the best accuracy to our knowledge.",
"title": ""
},
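Since the record above hinges on sublinear term-frequency weighting and document-frequency smoothing, a short sketch of that weighting may be useful. The exact variants used in the paper may differ; the formulas below are the standard sublinear-tf and smoothed-idf forms with cosine normalization.

```python
# Sublinear tf * smoothed idf weighting with L2 (cosine) normalization.
import math
from collections import Counter

def tfidf_vector(doc_tokens, doc_freq, n_docs):
    tf = Counter(doc_tokens)
    weights = {}
    for term, count in tf.items():
        sublinear_tf = 1.0 + math.log(count)                       # sublinear tf
        smoothed_idf = math.log((n_docs + 1) / (doc_freq.get(term, 0) + 1)) + 1.0
        weights[term] = sublinear_tf * smoothed_idf
    # Normalize so document length does not dominate the SVM features.
    norm = math.sqrt(sum(w * w for w in weights.values())) or 1.0
    return {term: w / norm for term, w in weights.items()}
```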
{
"docid": "cf3804e332e9bec1120261f9e4f98da8",
"text": "We propose Bilingually-constrained Recursive Auto-encoders (BRAE) to learn semantic phrase embeddings (compact vector representations for phrases), which can distinguish the phrases with different semantic meanings. The BRAE is trained in a way that minimizes the semantic distance of translation equivalents and maximizes the semantic distance of nontranslation pairs simultaneously. After training, the model learns how to embed each phrase semantically in two languages and also learns how to transform semantic embedding space in one language to the other. We evaluate our proposed method on two end-to-end SMT tasks (phrase table pruning and decoding with phrasal semantic similarities) which need to measure semantic similarity between a source phrase and its translation candidates. Extensive experiments show that the BRAE is remarkably effective in these two tasks.",
"title": ""
}
] |
[
{
"docid": "81476f837dd763301ba065ac78c5bb65",
"text": "Background: The ideal lip augmentation technique provides the longest period of efficacy, lowest complication rate, and best aesthetic results. A myriad of techniques have been described for lip augmentation, but the optimal approach has not yet been established. This systematic review with metaregression will focus on the various filling procedures for lip augmentation (FPLA), with the goal of determining the optimal approach. Methods: A systematic search for all English, French, Spanish, German, Italian, Portuguese and Dutch language studies involving FPLA was performed using these databases: Elsevier Science Direct, PubMed, Highwire Press, Springer Standard Collection, SAGE, DOAJ, Sweetswise, Free E-Journals, Ovid Lippincott Williams & Wilkins, Willey Online Library Journals, and Cochrane Plus. The reference section of every study selected through this database search was subsequently examined to identify additional relevant studies. Results: The database search yielded 29 studies. Nine more studies were retrieved from the reference sections of these 29 studies. The level of evidence ratings of these 38 studies were as follows: level Ib, four studies; level IIb, four studies; level IIIb, one study; and level IV, 29 studies. Ten studies were prospective. Conclusions: This systematic review sought to highlight all the quality data currently available regarding FPLA. Because of the considerable diversity of procedures, no definitive comparisons or conclusions were possible. Additional prospective studies and clinical trials are required to more conclusively determine the most appropriate approach for this procedure. Level of evidence: IV. © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fc9eb12afb2c86005ae4f06835feb6cc",
"text": "Peer pressure is a reoccurring phenomenon in criminal or deviant behaviour especially, as it pertains to adolescents. It may begin in early childhood of about 5years and increase through childhood to become more intense in adolescence years. This paper examines how peer pressure is present in adolescents and how it may influence or create the leverage to non-conformity to societal norms and laws. The paper analyses the process and occurrence of peer influence and pressure on individuals and groups within the framework of the social learning and the social control theories. Major features of the peer pressure process are identified as group dynamics, delinquent peer subculture, peer approval of delinquent behaviour and sanctions for non-conformity which include ridicule, mockery, ostracism and even mayhem or assault in some cases. Also, the paper highlights acceptance and rejection as key concepts that determine the sway or gladiation of adolescents to deviant and criminal behaviour. Finally, it concludes that peer pressure exists for conformity and in delinquent subculture, the result is conformity to criminal codes and behaviour. The paper recommends more urgent, serious and offensive grass root approaches by governments and institutions against this growing threat to the continued peace, orderliness and development of society.",
"title": ""
},
{
"docid": "70a9aa97fc51452fb87288c86d0299d6",
"text": "The germline precursor to the ferrochelatase antibody 7G12 was found to bind the polyether jeffamine in addition to its cognate hapten N-methylmesoporphyrin. A comparison of the X-ray crystal structures of the ligand-free germline Fab and its complex with either hapten or jeffamine reveals that the germline antibody undergoes significant conformational changes upon the binding of these two structurally distinct ligands, which lead to increased antibody-ligand complementarity. The five somatic mutations introduced during affinity maturation lead to enhanced binding affinity for hapten and a loss in affinity for jeffamine. Moreover, a comparison of the crystal structures of the germline and affinity-matured antibodies reveals that somatic mutations not only fix the optimal binding site conformation for the hapten, but also introduce interactions that interfere with the binding of non-hapten molecules. The structural plasticity of this germline antibody and the structural effects of the somatic mutations that result in enhanced affinity and specificity for hapten likely represent general mechanisms used by the immune response, and perhaps primitive proteins, to evolve high affinity, selective receptors for so many distinct chemical structures.",
"title": ""
},
{
"docid": "6d589aaae8107bf6b71c0f06f7a49a28",
"text": "1. INTRODUCTION The explosion of digital connectivity, the significant improvements in communication and information technologies and the enforced global competition are revolutionizing the way business is performed and the way organizations compete. A new, complex and rapidly changing economic order has emerged based on disruptive innovation, discontinuities, abrupt and seditious change. In this new landscape, knowledge constitutes the most important factor, while learning, which emerges through cooperation, together with the increased reliability and trust, is the most important process (Lundvall and Johnson, 1994). The competitive survival and ongoing sustenance of an organisation primarily depend on its ability to redefine and adopt continuously goals, purposes and its way of doing things (Malhotra, 2001). These trends suggest that private and public organizations have to reinvent themselves through 'continuous non-linear innovation' in order to sustain themselves and achieve strategic competitive advantage. The extant literature highlights the great potential of ICT tools for operational efficiency, cost reduction, quality of services, convenience, innovation and learning in private and public sectors. However, scholarly investigations have focused primarily on the effects and outcomes of ICTs (Information & Communication Technology) for the private sector. The public sector has been sidelined because it tends to lag behind in the process of technology adoption and business reinvention. Only recently has the public sector come to recognize the potential importance of ICT and e-business models as a means of improving the quality and responsiveness of the services they provide to their citizens, expanding the reach and accessibility of their services and public infrastructure and allowing citizens to experience a faster and more transparent form of access to government services. The initiatives of government agencies and departments to use ICT tools and applications, Internet and mobile devices to support good governance, strengthen existing relationships and build new partnerships within civil society, are known as eGovernment initiatives. As with e-commerce, eGovernment represents the introduction of a great wave of technological innovation as well as government reinvention. It represents a tremendous impetus to move forward in the 21 st century with higher quality, cost effective government services and a better relationship between citizens and government (Fang, 2002). Many government agencies in developed countries have taken progressive steps toward the web and ICT use, adding coherence to all local activities on the Internet, widening local access and skills, opening up interactive services for local debates, and increasing the participation of citizens on promotion and management …",
"title": ""
},
{
"docid": "409baee7edaec587727624192eab93aa",
"text": "It has been widely shown that recognition memory includes two distinct retrieval processes: familiarity and recollection. Many studies have shown that recognition memory can be facilitated when there is a perceptual match between the studied and the tested items. Most event-related potential studies have explored the perceptual match effect on familiarity on the basis of the hypothesis that the specific event-related potential component associated with familiarity is the FN400 (300-500 ms mid-frontal effect). However, it is currently unclear whether the FN400 indexes familiarity or conceptual implicit memory. In addition, on the basis of the findings of a previous study, the so-called perceptual manipulations in previous studies may also involve some conceptual alterations. Therefore, we sought to determine the influence of perceptual manipulation by color changes on recognition memory when the perceptual or the conceptual processes were emphasized. Specifically, different instructions (perceptually or conceptually oriented) were provided to the participants. The results showed that color changes may significantly affect overall recognition memory behaviorally and that congruent items were recognized with a higher accuracy rate than incongruent items in both tasks, but no corresponding neural changes were found. Despite the evident familiarity shown in the two tasks (the behavioral performance of recognition memory was much higher than at the chance level), the FN400 effect was found in conceptually oriented tasks, but not perceptually oriented tasks. It is thus highly interesting that the FN400 effect was not induced, although color manipulation of recognition memory was behaviorally shown, as seen in previous studies. Our findings of the FN400 effect for the conceptual but not perceptual condition support the explanation that the FN400 effect indexes conceptual implicit memory.",
"title": ""
},
{
"docid": "1b6ddffacc50ad0f7e07675cfe12c282",
"text": "Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.",
"title": ""
},
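The key relation the abstract above refers to, reflection as a convolution that becomes a per-coefficient product in spherical harmonics, can be written compactly. The following shows only the widely used Lambertian special case, with assumed notation: lighting coefficients are scaled by the clamped-cosine transfer coefficients, so recovering the lighting is a per-coefficient division that becomes ill-conditioned where those coefficients decay toward zero.

```latex
% Lambertian irradiance as a spherical-harmonic product (notation assumed):
% L_{lm} are lighting coefficients, A_l the clamped-cosine coefficients.
\begin{align*}
  E_{lm} &= \Lambda_l \, A_l \, L_{lm},
  \qquad \Lambda_l = \sqrt{\frac{4\pi}{2l+1}},\\
  L_{lm} &= \frac{E_{lm}}{\Lambda_l A_l}
  \quad\text{whenever } A_l \neq 0
  \quad\text{(deconvolution; ill-posed where } A_l \approx 0\text{)}.
\end{align*}
```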
{
"docid": "eced59d8ec159f3127e7d2aeca76da96",
"text": "Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views and device-less interaction to support face to face, or dyadic, interaction with 3D virtual objects. Its main advantage over more traditional AR approaches, such as handheld devices with composited graphics or see-through head worn displays, is that users are able to interact with 3D virtual objects and each other without cumbersome devices that obstruct face to face interaction. We detail our prototype system and a number of interactive experiences. We present an initial user experiment that shows that participants are able to deduce the size and distance of a virtual projected object. A second experiment shows that participants are able to infer which of a number of targets the other user indicates by pointing.",
"title": ""
},
{
"docid": "dae63c2eb42acf7c5aa75948169abbbf",
"text": "This paper introduces a local planner which computes a set of commands, allowing an autonomous vehicle to follow a given trajectory. To do so, the platform relies on a localization system, a map and a cost map which represents the obstacles in the environment. The presented method computes a set of tentative trajectories, using a schema based on a Frenet frame obtained from the global planner. These trajectories are then scored using a linear combination of weighted cost functions. In the presented approach, new weights are introduced in order to satisfy the specificities of our autonomous platform, Verdino. A study on the influence of the defined weights in the final behavior of the vehicle is introduced. From these tests, several configurations have been chosen and ranked according to two different proposed behaviors. The method has been tested both in simulation and in real conditions.",
"title": ""
},
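The scoring step described in the record above, evaluating each tentative trajectory with a weighted linear combination of cost functions and keeping the cheapest collision-free candidate, can be sketched directly. The cost-function names and weights below are illustrative, not Verdino's actual configuration.

```python
# Score candidate trajectories with a weighted sum of cost terms and pick
# the cheapest collision-free one.
def score_trajectory(traj, costs, weights):
    # costs: dict of name -> callable(trajectory) -> float
    return sum(weights[name] * fn(traj) for name, fn in costs.items())

def select_trajectory(candidates, costs, weights, in_collision):
    feasible = [t for t in candidates if not in_collision(t)]
    if not feasible:
        return None  # caller falls back to a safe behavior (e.g., stop)
    return min(feasible, key=lambda t: score_trajectory(t, costs, weights))

# Example wiring with hypothetical cost terms:
# costs = {"lateral_offset": lat_cost, "obstacle_proximity": obs_cost,
#          "velocity_error": vel_cost}
# weights = {"lateral_offset": 1.0, "obstacle_proximity": 5.0,
#            "velocity_error": 0.5}
```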
{
"docid": "13f24b04e37c9e965d85d92e2c588c9a",
"text": "In this paper we propose a new user purchase preference model based on their implicit feedback behavior. We analyze user behavior data to seek their purchase preference signals. We find that if a user has more purchase preference on a certain item he would tend to browse it for more times. It gives us an important inspiration that, not only purchasing behavior but also other types of implicit feedback like browsing behavior, can indicate user purchase preference. We further find that user purchase preference signals also exist in the browsing behavior of item categories. Therefore, when we want to predict user purchase preference for certain items, we can integrate these behavior types into our user preference model by converting such preference signals into numerical values. We evaluate our model on a real-world dataset from a shopping site in China. Results further validate that user purchase preference model in our paper can capture more and accurate user purchase preference information from implicit feedback and greatly improves the performance of user purchase prediction.",
"title": ""
},
{
"docid": "2b42cf158d38153463514ed7bc00e25f",
"text": "The Disney Corporation made their first princess film in 1937 and has continued producing these movies. Over the years, Disney has received criticism for their gender interpretations and lack of racial diversity. This study will examine princess films from the 1990’s and 2000’s and decide whether race or time has an effect on the gender role portrayal of each character. By using a content analysis, this study identified the changes with each princess. The findings do suggest the princess characters exhibited more egalitarian behaviors over time. 1 The Disney Princess franchise began in 1937 with Snow White and the Seven Dwarfs and continues with the most recent film was Tangled (Rapunzel) in 2011. In past years, Disney film makers were criticized by the public audience for lack of ethnic diversity. In 1995, Disney introduced Pocahontas and three years later Mulan emerged creating racial diversity to the collection. Eleven years later, Disney released The Princess and the Frog (2009). The ongoing question is whether diverse princesses maintain the same qualities as their European counterparts. Walt Disney’s legacy lives on, but viewers are still curious about the all white princess collection which did not gain racial counterparts until 58 years later. It is important to recognize the role the Disney Corporation plays in today’s society. The company has several princesses’ films with matching merchandise. Parents purchase the items for their children and through film and merchandise, children are receiving messages such as how a woman ought to act, think or dress. Gender construction in Disney princess films remains important because of the messages it sends to children. We need to know whether gender roles presented in the films downplay the intellect of a woman in a modern society or whether Disney princesses are constricted to the female gender roles such as submissiveness and nurturing. In addition, we need to consider whether the messages are different for diverse princesses. The purpose of the study is to investigate the changes in gender construction in Disney princess characters related to the race of the character. This research also examines how gender construction of Disney princess characters changed from the 1900’s to 2000’s. A comparative content analysis will analyze gender role differences between women of color and white princesses. In particular, the study will ask whether race does matter in the gender roles revealed among each female character. By using social construction perspectives, Disney princesses of color were more masculine, but the most recent films became more egalitarian. 2 LITERATURE REVIEW Women in Disney film Davis (2006) examined women in Disney animated films by creating three categories: The Classic Years, The Middle Era, and The Eisner Era. The Classic Years, 19371967 were described as the beginning of Disney. During this period, women were rarely featured alone in films, but held central roles in the mid-1930s (Davis 2006:84). Three princess films were released and the characters carried out traditional feminine roles such as domestic work and passivity. Davis (2006) argued the princesses during The Classic Era were the least active and dynamic. The Middle Era, 1967-1988, led to a downward spiral for the company after the deaths of Walt and Roy Disney. The company faced increased amounts of debt and only eight Disney films were produced. The representation of women remained largely static (Davis 2006:137). 
The Eisner Era, 1989-2005, represented a revitalization of Disney with the release of 12 films with leading female roles. Based on the eras, Davis argued there was a shift after Walt Disney’s death which allowed more women in leading roles and released them from traditional gender roles. Independence was a new theme in this era allowing women to be selfsufficient unlike women in The Classic Era who relied on male heroines. Gender Role Portrayal in films England, Descartes, and Meek (2011) examined the Disney princess films and challenged the ideal of traditional gender roles among the prince and princess characters. The study consisted of all nine princess films divided into three categories based on their debut: early, middle and most current. The researchers tested three hypotheses: 1) gender roles among males and female characters would differ, 2) males would rescue or attempt to rescue the princess, and 3) characters would display more egalitarian behaviors over time (England, et al. 2011:557-58). The researchers coded traits as masculine and feminine. They concluded that princesses 3 displayed a mixture of masculine and feminine characteristics. These behaviors implied women are androgynous beings. For example, princesses portrayed bravery almost twice as much as princes (England, et al. 2011). The findings also showed males rescued women more and that women were rarely shown as rescuers. Overall, the data indicated Disney princess films had changed over time as women exhibited more masculine behaviors in more recent films. Choueiti, Granados, Pieper, and Smith (2010) conducted a content analysis regarding gender roles in top grossing Grated films. The researchers considered the following questions: 1) What is the male to female ratio? 2) Is gender related to the presentation of the character demographics such as role, type, or age? and 3) Is gender related to the presentation of character’s likeability, and the equal distribution of male and females from 1990-2005(Choueiti et al. 2010:776-77). The researchers concluded that there were more male characters suggesting the films were patriarchal. However, there was no correlation with demographics of the character and males being viewed as more likeable. Lastly, female representation has slightly decreased from 214 characters or 30.1% in 1990-94 to 281 characters or 29.4% in 2000-2004 (Choueiti et al. 2010:783). From examining gender role portrayals, females have become androgynous while maintaining minimal roles in animated film.",
"title": ""
},
{
"docid": "2fbc75f848a0a3ae8228b5c6cbe76ec4",
"text": "The authors summarize 35 years of empirical research on goal-setting theory. They describe the core findings of the theory, the mechanisms by which goals operate, moderators of goal effects, the relation of goals and satisfaction, and the role of goals as mediators of incentives. The external validity and practical significance of goal-setting theory are explained, and new directions in goal-setting research are discussed. The relationships of goal setting to other theories are described as are the theory's limitations.",
"title": ""
},
{
"docid": "9780c2d63739b8bf4f5c48f12014f605",
"text": "It has been hypothesized that unexplained infertility may be related to specific personality and coping styles. We studied two groups of women with explained infertility (EIF, n = 63) and unexplained infertility (UIF, n = 42) undergoing an in vitro fertilization (IVF) cycle. Women completed personality and coping style questionnaires prior to the onset of the cycle, and state depression and anxiety scales before and at two additional time points during the cycle. Almost no in-between group differences were found at any of the measured time points in regards to the Minnesota Multiphasic Personality Inventory-2 validity and clinical scales, Illness Cognitions and Life Orientation Test, or for the situational measures. The few differences found suggest a more adaptive, better coping, and functioning defensive system in women with EIF. In conclusion, we did not find any clinically significant personality differences or differences in depression or anxiety levels between women with EIF and UIF during an IVF cycle. Minor differences found are probably a reaction to the ambiguous medical situation with its uncertain prognosis, amplifying certain traits which are not specific to one psychological structure but rather to the common experience shared by the group. The results of this study do not support the possibility that personality traits are involved in the pathophysiology of unexplained infertility.",
"title": ""
},
{
"docid": "c25d877f23f874a5ced7548998ec8157",
"text": "The paper presents a Neural Network model for modeling academic profile of students. The proposed model allows prediction of students’ academic performance based on some of their qualitative observations. Classifying and predicting students’ academic performance using arithmetical and statistical techniques may not necessarily offer the best way to evaluate human acquisition of knowledge and skills, but a hybridized fuzzy neural network model successfully handles reasoning with imprecise information, and enables representation of student modeling in the linguistic form the same way the human teachers do. The model is designed, developed and tested in MATLAB and JAVA which considers factors like age, gender, education, past performance, work status, study environment etc. for performance prediction of students. A Fuzzy Probabilistic Neural Network model has been proposed which enables the design of an easy-to-use, personalized student performance prediction component. The results of experiments show that the model outperforms traditional back-propagation neural networks as well as statistical models. It is also found to be a useful tool in predicting the performance of students belonging to any stream. The model may provide dual advantage to the educational institutions; first by helping teachers to amend their teaching methodology based on the level of students thereby improving students’ performances and secondly classifying the likely successful and unsuccessful students.",
"title": ""
},
{
"docid": "02750b69e72daf7f82cb57e1f7f228bf",
"text": "An advanced, simple to use, detrending method to be used before heart rate variability analysis (HRV) is presented. The method is based on smoothness priors approach and operates like a time-varying finite-impulse response high-pass filter. The effect of the detrending on time- and frequency-domain analysis of HRV is studied.",
"title": ""
},
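The detrending operation named in the record above is usually written as removing a trend estimated with a regularized second-difference operator, which behaves like a time-varying high-pass filter. A minimal sketch follows; the regularization parameter value is only an example, and details of the exact formulation in the paper may differ.

```python
# Smoothness-priors detrending: estimate a smooth trend with a regularized
# second-difference operator and subtract it from the signal.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def smoothness_priors_detrend(z, lambda_=500.0):
    z = np.asarray(z, dtype=float)
    n = len(z)
    I = sparse.eye(n, format="csc")
    # Second-order difference matrix D2 of shape (n-2, n): rows [1, -2, 1].
    data = np.array([np.ones(n), -2.0 * np.ones(n), np.ones(n)])
    D2 = sparse.spdiags(data, [0, 1, 2], n - 2, n)
    A = sparse.csc_matrix(I + lambda_**2 * (D2.T @ D2))
    trend = spsolve(A, z)
    return z - trend  # detrended, nearly stationary component
```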
{
"docid": "93a283324fed31e4ecf81d62acae583a",
"text": "The success of the state-of-the-art deblurring methods mainly depends on the restoration of sharp edges in a coarse-to-fine kernel estimation process. In this paper, we propose to learn a deep convolutional neural network for extracting sharp edges from blurred images. Motivated by the success of the existing filtering-based deblurring methods, the proposed model consists of two stages: suppressing extraneous details and enhancing sharp edges. We show that the two-stage model simplifies the learning process and effectively restores sharp edges. Facilitated by the learned sharp edges, the proposed deblurring algorithm does not require any coarse-to-fine strategy or edge selection, thereby significantly simplifying kernel estimation and reducing computation load. Extensive experimental results on challenging blurry images demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of visual quality and run-time.",
"title": ""
},
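The two-stage structure described above, first suppressing extraneous details and then enhancing sharp edges, can be sketched as two small convolutional stacks. The layer widths and kernel sizes below are illustrative assumptions, not the architecture from the paper.

```python
# Two-stage sketch: smooth the blurred input, then predict an edge map that
# a downstream step would use for kernel estimation.
import torch
import torch.nn as nn

class TwoStageEdgeNet(nn.Module):
    def __init__(self, channels=1, width=32):
        super().__init__()
        # Stage 1: suppress extraneous details in the blurred image.
        self.suppress = nn.Sequential(
            nn.Conv2d(channels, width, 5, padding=2), nn.ReLU(),
            nn.Conv2d(width, channels, 5, padding=2),
        )
        # Stage 2: enhance sharp, salient edges from the smoothed result.
        self.enhance = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, blurred):
        smoothed = self.suppress(blurred)
        return self.enhance(smoothed)

# usage: edges = TwoStageEdgeNet()(torch.randn(1, 1, 128, 128))
```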
{
"docid": "c688d24fd8362a16a19f830260386775",
"text": "We present a fast iterative algorithm for identifying the Support Vectors of a given set of points. Our algorithm works by maintaining a candidate Support Vector set. It uses a greedy approach to pick points for inclusion in the candidate set. When the addition of a point to the candidate set is blocked because of other points already present in the set we use a backtracking approach to prune away such points. To speed up convergence we initialize our algorithm with the nearest pair of points from opposite classes. We then use an optimization based approach to increment or prune the candidate Support Vector set. The algorithm makes repeated passes over the data to satisfy the KKT constraints. The memory requirements of our algorithm scale as O(|S|) in the average case, where|S| is the size of the Support Vector set. We show that the algorithm is extremely competitive as compared to other conventional iterative algorithms like SMO and the NPA. We present results on a variety of real life datasets to validate our claims.",
"title": ""
},
{
"docid": "65fe1d49a386f62d467b2796a270510c",
"text": "The connection between human resources and performance in firms in the private sector is well documented. What is less clear is whether the move towards managerialism that has taken place within the Australian public sector during the last twenty years has brought with it some of the features of the relationships between Human Resource Management (HRM) and performance experienced within the private sector. The research begins with a review of the literature. In particular the conceptual thinking surrounding the connection between HRM and performance within private sector organisations is explored. Issues of concern are the direction of the relationship between HRM and performance and definitional questions as to the nature and level of HRM to be investigated and the measurement of performance. These conceptual issues are also debated within the context of a public sector and particularly the Australian environment. An outcome of this task is the specification of a set of appropriate parameters for a study of these linkages within Australian public sector organizations. Short Description The paper discusses the significance of strategic human resource management in relation to performance.",
"title": ""
},
{
"docid": "b77c65cf9fe637fc88752f6776a21e36",
"text": "This paper studies computer security from first principles. The basic questions \"Why?\", \"How do we know what we know?\" and \"What are the implications of what we believe?\"",
"title": ""
},
{
"docid": "8305594d16f0565e3a62cbb69821c485",
"text": "MOTIVATION\nAccurately predicting protein secondary structure and relative solvent accessibility is important for the study of protein evolution, structure and function and as a component of protein 3D structure prediction pipelines. Most predictors use a combination of machine learning and profiles, and thus must be retrained and assessed periodically as the number of available protein sequences and structures continues to grow.\n\n\nRESULTS\nWe present newly trained modular versions of the SSpro and ACCpro predictors of secondary structure and relative solvent accessibility together with their multi-class variants SSpro8 and ACCpro20. We introduce a sharp distinction between the use of sequence similarity alone, typically in the form of sequence profiles at the input level, and the additional use of sequence-based structural similarity, which uses similarity to sequences in the Protein Data Bank to infer annotations at the output level, and study their relative contributions to modern predictors. Using sequence similarity alone, SSpro's accuracy is between 79 and 80% (79% for ACCpro) and no other predictor seems to exceed 82%. However, when sequence-based structural similarity is added, the accuracy of SSpro rises to 92.9% (90% for ACCpro). Thus, by combining both approaches, these problems appear now to be essentially solved, as an accuracy of 100% cannot be expected for several well-known reasons. These results point also to several open technical challenges, including (i) achieving on the order of ≥ 80% accuracy, without using any similarity with known proteins and (ii) achieving on the order of ≥ 85% accuracy, using sequence similarity alone.\n\n\nAVAILABILITY AND IMPLEMENTATION\nSSpro, SSpro8, ACCpro and ACCpro20 programs, data and web servers are available through the SCRATCH suite of protein structure predictors at http://scratch.proteomics.ics.uci.edu.",
"title": ""
},
{
"docid": "3eb50289c3b28d2ce88052199d40bf8d",
"text": "Transportation Problem is an important aspect which has been widely studied in Operations Research domain. It has been studied to simulate different real life problems. In particular, application of this Problem in NPHard Problems has a remarkable significance. In this Paper, we present a comparative study of Transportation Problem through Probabilistic and Fuzzy Uncertainties. Fuzzy Logic is a computational paradigm that generalizes classical two-valued logic for reasoning under uncertainty. In order to achieve this, the notation of membership in a set needs to become a matter of degree. By doing this we accomplish two things viz., (i) ease of describing human knowledge involving vague concepts and (ii) enhanced ability to develop cost-effective solution to real-world problem. The multi-valued nature of Fuzzy Sets allows handling uncertain and vague information. It is a model-less approach and a clever disguise of Probability Theory. We give comparative simulation results of both approaches and discuss the Computational Complexity. To the best of our knowledge, this is the first work on comparative study of Transportation Problem using Probabilistic and Fuzzy Uncertainties.",
"title": ""
}
] |
scidocsrr
|
46ebfa26fb7981c876cf3c7a2cfae58d
|
Understanding Information
|
[
{
"docid": "aa32bff910ce6c7b438dc709b28eefe3",
"text": "Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for a e-mail: [email protected] 482 The European Physical Journal Special Topics urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science",
"title": ""
}
] |
[
{
"docid": "e59136e0d0a710643a078b58075bd8cd",
"text": "PURPOSE\nEpidemiological evidence suggests that chronic consumption of fruit-based flavonoids is associated with cognitive benefits; however, the acute effects of flavonoid-rich (FR) drinks on cognitive function in the immediate postprandial period require examination. The objective was to investigate whether consumption of FR orange juice is associated with acute cognitive benefits over 6 h in healthy middle-aged adults.\n\n\nMETHODS\nMales aged 30-65 consumed a 240-ml FR orange juice (272 mg) and a calorie-matched placebo in a randomized, double-blind, counterbalanced order on 2 days separated by a 2-week washout. Cognitive function and subjective mood were assessed at baseline (prior to drink consumption) and 2 and 6 h post consumption. The cognitive battery included eight individual cognitive tests. A standardized breakfast was consumed prior to the baseline measures, and a standardized lunch was consumed 3 h post-drink consumption.\n\n\nRESULTS\nChange from baseline analysis revealed that performance on tests of executive function and psychomotor speed was significantly better following the FR drink compared to the placebo. The effects of objective cognitive function were supported by significant benefits for subjective alertness following the FR drink relative to the placebo.\n\n\nCONCLUSIONS\nThese data demonstrate that consumption of FR orange juice can acutely enhance objective and subjective cognition over the course of 6 h in healthy middle-aged adults.",
"title": ""
},
{
"docid": "2690f802022b273d41b3131aa982b91b",
"text": "Deep neural networks are demonstrating excellent performance on several classical vision problems. However, these networks are vulnerable to adversarial examples, minutely modified images that induce arbitrary attacker-chosen output from the network. We propose a mechanism to protect against these adversarial inputs based on a generative model of the data. We introduce a pre-processing step that projects on the range of a generative model using gradient descent before feeding an input into a classifier. We show that this step provides the classifier with robustness against first-order, substitute model, and combined adversarial attacks. Using a min-max formulation, we show that there may exist adversarial examples even in the range of the generator, natural-looking images extremely close to the decision boundary for which the classifier has unjustifiedly high confidence. We show that adversarial training on the generative manifold can be used to make a classifier that is robust to these attacks. Finally, we show how our method can be applied even without a pre-trained generative model using a recent method called the deep image prior. We evaluate our method on MNIST, CelebA and Imagenet and show robustness against the current state of the art attacks.",
"title": ""
},
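The pre-processing step the abstract above describes, projecting an input onto the range of a generative model by gradient descent before classification, can be sketched as follows. The generator G, the latent dimensionality, and the optimizer settings are placeholders; in practice several random restarts of the latent code are often used.

```python
# Project x onto the range of a pretrained generator G by optimizing the
# latent code z to minimize the reconstruction error, then classify G(z*).
import torch

def project_to_generator_range(G, x, latent_dim=100, steps=200, lr=0.05):
    z = torch.randn(x.shape[0], latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - x) ** 2).mean()   # reconstruction error in image space
        loss.backward()
        opt.step()
    return G(z.detach())                   # the input handed to the classifier
```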
{
"docid": "1c5e17c7acff27e3b10aecf15c5809e7",
"text": "Recent years witness a growing interest in nonstandard epistemic logics of “knowing whether”, “knowing what”, “knowing how” and so on. These logics are usually not normal, i.e., the standard axioms and reasoning rules for modal logic may be invalid. In this paper, we show that the conditional “knowing value” logic proposed by Wang and Fan [10] can be viewed as a disguised normal modal logic by treating the negation of Kv operator as a special diamond. Under this perspective, it turns out that the original first-order Kripke semantics can be greatly simplified by introducing a ternary relation R i in standard Kripke models which associates one world with two i-accessible worlds that do not agree on the value of constant c. Under intuitive constraints, the modal logic based on such Kripke models is exactly the one studied by Wang and Fan [10,11]. Moreover, there is a very natural binary generalization of the “knowing value” diamond, which, surprisingly, does not increase the expressive power of the logic. The resulting logic with the binary diamond has a transparent normal modal system which sharpens our understanding of the “knowing value” logic and simplifies some previous hard problems.",
"title": ""
},
{
"docid": "0ee27f9045935db4241e9427bed2af59",
"text": "As a new generation of deep-sea Autonomous Underwater Vehicle (AUV), Qianlong I is a 6000m rated glass deep-sea manganese nodules detection AUV which based on the CR01 and the CR02 deep-sea AUVs and developed by Shenyang Institute of Automation, the Chinese Academy of Sciences from 2010. The Qianlong I was tested in the thousand-isles lake in Zhejiang Province of China during November 2012 to March 2013 and the sea trials were conducted in the South China Sea during April 20-May 2, 2013 after the lake tests and the ocean application completed in October 2013. This paper describes two key problems encountered in the process of developing Qianlong I, including the launch and recovery systems development and variable buoyancy system development. Results from the recent lake and sea trails are presented, and future missions and development plans are discussed.",
"title": ""
},
{
"docid": "98d1c35aeca5de703cec468b2625dc72",
"text": "Congenital adrenal hyperplasia was described in London by Phillips (1887) who reported four cases of spurious hermaphroditism in one family. Fibiger (1905) noticed that there was enlargement of the adrenal glands in some infants who had died after prolonged vomiting and dehydration. Butler, Ross, and Talbot (1939) reported a case which showed serum electrolyte changes similar to those of Addison's disease. Further developments had to await the synthesis of cortisone. The work ofWilkins, Lewis, Klein, and Rosemberg (1950) showed that cortisone could alleviate the disorder and suppress androgen secretion. Bartter, Albright, Forbes, Leaf, Dempsey, and Carroll (1951) suggested that, in congenital adrenal hyperplasia, there might be a primary impairment of synthesis of cortisol (hydrocortisone, compound F) and a secondary rise of pituitary adrenocorticotrophin (ACTH) production. This was confirmed by Jailer, Louchart, and Cahill (1952) who showed that ACTH caused little increase in the output of cortisol in such cases. In the same year, Snydor, Kelley, Raile, Ely, and Sayers (1953) found an increased level ofACTH in the blood of affected patients. Studies of enzyme systems were carried out. Jailer, Gold, Vande Wiele, and Lieberman (1955) and Frantz, Holub, and Jailer (1960) produced evidence that the most common site for the biosynthetic block was in the C-21 hydroxylating system. Eberlein and Bongiovanni (1955) showed that there was a C-l 1 hydroxylation defect in patients with the hypertensive form of congenital adrenal hyperplasia, and Bongiovanni (1961) and Bongiovanni and Kellenbenz (1962), showed that in some patients there was a further type of enzyme defect, a 3-(-hydroxysteroid dehydrogenase deficiency, an enzyme which is required early in the metabolic pathway. Prader and Siebenmann (1957) described a female infant who had adrenal insufficiency and congenital lipoid hyperplasia of the",
"title": ""
},
{
"docid": "2466ac1ce3d54436f74b5bb024f89662",
"text": "In this paper we discuss our work on applying media theory to the creation of narrative augmented reality (AR) experiences. We summarize the concepts of remediation and media forms as they relate to our work, argue for their importance to the development of a new medium such as AR, and present two example AR experiences we have designed using these conceptual tools. In particular, we focus on leveraging the interaction between the physical and virtual world, remediating existing media (film, stage and interactive CD-ROM), and building on the cultural expectations of our users.",
"title": ""
},
{
"docid": "bf03f941bcf921a44d0a34ec2161ee34",
"text": "Epidermolytic ichthyosis (EI) is a rare autosomal dominant genodermatosis that presents at birth as a bullous disease, followed by a lifelong ichthyotic skin disorder. Essentially, it is a defective keratinization caused by mutations of keratin 1 (KRT1) or keratin 10 (KRT10) genes, which lead to skin fragility, blistering, and eventually hyperkeratosis. Successful management of EI in the newborn period can be achieved through a thoughtful, directed, and interdisciplinary or multidisciplinary approach that encompasses family support. This condition requires meticulous care to avoid associated morbidities such as infection and dehydration. A better understanding of the disrupted barrier protection of the skin in these patients provides a basis for management with daily bathing, liberal emollients, pain control, and proper nutrition as the mainstays of treatment. In addition, this case presentation will include discussions on the pathophysiology, complications, differential diagnosis, and psychosocial and ethical issues.",
"title": ""
},
{
"docid": "b8b96789191e5afa48bea1d9e92443d5",
"text": "Methionine, cysteine, homocysteine, and taurine are the 4 common sulfur-containing amino acids, but only the first 2 are incorporated into proteins. Sulfur belongs to the same group in the periodic table as oxygen but is much less electronegative. This difference accounts for some of the distinctive properties of the sulfur-containing amino acids. Methionine is the initiating amino acid in the synthesis of virtually all eukaryotic proteins; N-formylmethionine serves the same function in prokaryotes. Within proteins, many of the methionine residues are buried in the hydrophobic core, but some, which are exposed, are susceptible to oxidative damage. Cysteine, by virtue of its ability to form disulfide bonds, plays a crucial role in protein structure and in protein-folding pathways. Methionine metabolism begins with its activation to S-adenosylmethionine. This is a cofactor of extraordinary versatility, playing roles in methyl group transfer, 5'-deoxyadenosyl group transfer, polyamine synthesis, ethylene synthesis in plants, and many others. In animals, the great bulk of S-adenosylmethionine is used in methylation reactions. S-Adenosylhomocysteine, which is a product of these methyltransferases, gives rise to homocysteine. Homocysteine may be remethylated to methionine or converted to cysteine by the transsulfuration pathway. Methionine may also be metabolized by a transamination pathway. This pathway, which is significant only at high methionine concentrations, produces a number of toxic endproducts. Cysteine may be converted to such important products as glutathione and taurine. Taurine is present in many tissues at higher concentrations than any of the other amino acids. It is an essential nutrient for cats.",
"title": ""
},
{
"docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db",
"text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.",
"title": ""
},
{
"docid": "372182b4ac2681ceedb9d78e9f38343d",
"text": "A 12-bit 10-GS/s interleaved (IL) pipeline analog-to-digital converter (ADC) is described in this paper. The ADC achieves a signal to noise and distortion ratio (SNDR) of 55 dB and a spurious free dynamic range (SFDR) of 66 dB with a 4-GHz input signal, is fabricated in the 28-nm CMOS technology, and dissipates 2.9 W. Eight pipeline sub-ADCs are interleaved to achieve 10-GS/s sample rate, and mismatches between sub-ADCs are calibrated in the background. The pipeline sub-ADCs employ a variety of techniques to lower power, like avoiding a dedicated sample-and-hold amplifier (SHA-less), residue scaling, flash background calibration, dithering and inter-stage gain error background calibration. A push–pull input buffer optimized for high-frequency linearity drives the interleaved sub-ADCs to enable >7-GHz bandwidth. A fast turn-ON bootstrapped switch enables 100-ps sampling. The ADC also includes the ability to randomize the sub-ADC selection pattern to further reduce residual interleaving spurs.",
"title": ""
},
{
"docid": "eb956188486caa595b7f38d262781af7",
"text": "Due to the competitiveness of the computing industry, software developers are pressured to quickly deliver new code releases. At the same time, operators are expected to update and keep production systems stable at all times. To overcome the development–operations barrier, organizations have started to adopt Infrastructure as Code (IaC) tools to efficiently deploy middleware and applications using automation scripts. These automations comprise a series of steps that should be idempotent to guarantee repeatability and convergence. Rigorous testing is required to ensure that the system idempotently converges to a desired state, starting from arbitrary states. We propose and evaluate a model-based testing framework for IaC. An abstracted system model is utilized to derive state transition graphs, based on which we systematically generate test cases for the automation. The test cases are executed in light-weight virtual machine environments. Our prototype targets one popular IaC tool (Chef), but the approach is general. We apply our framework to a large base of public IaC scripts written by operators, showing that it correctly detects non-idempotent automations.",
"title": ""
},
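At the core of the testing the abstract above describes is a convergence and idempotence check: rerunning the automation from an already-converged state must change nothing. A minimal sketch of that check follows; prepare_environment, run_automation and snapshot_state are hypothetical hooks standing in for the disposable VM/container tooling and state serialization.

```python
# Idempotence check: run the automation, snapshot the state, run it again,
# and require that the second run leaves the state unchanged.
def is_idempotent(prepare_environment, run_automation, snapshot_state, start_state):
    prepare_environment(start_state)     # e.g., boot a fresh container in this state
    run_automation()                     # first run: should converge
    first = snapshot_state()
    run_automation()                     # second run from the converged state
    second = snapshot_state()
    return first == second

# In a model-based setup this check is repeated for every start state reachable
# in the derived state-transition graph, not just the clean-machine state.
```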
{
"docid": "b3790611437e1660b7c222adcb26b510",
"text": "There have been increasing interests in the robotics community in building smaller and more agile autonomous micro aerial vehicles (MAVs). In particular, the monocular visual-inertial system (VINS) that consists of only a camera and an inertial measurement unit (IMU) forms a great minimum sensor suite due to its superior size, weight, and power (SWaP) characteristics. In this paper, we present a tightly-coupled nonlinear optimization-based monocular VINS estimator for autonomous rotorcraft MAVs. Our estimator allows the MAV to execute trajectories at 2 m/s with roll and pitch angles up to 30 degrees. We present extensive statistical analysis to verify the performance of our approach in different environments with varying flight speeds.",
"title": ""
},
{
"docid": "7f61235bb8b77376936256dcf251ee0b",
"text": "These practical guidelines for the biological treatment of personality disorders in primary care settings were developed by an international Task Force of the World Federation of Societies of Biological Psychiatry (WFSBP). They embody the results of a systematic review of all available clinical and scientific evidence pertaining to the biological treatment of three specific personality disorders, namely borderline, schizotypal and anxious/avoidant personality disorder in addition to some general recommendations for the whole field. The guidelines cover disease definition, classification, epidemiology, course and current knowledge on biological underpinnings, and provide a detailed overview on the state of the art of clinical management. They deal primarily with biological treatment (including antidepressants, neuroleptics, mood stabilizers and some further pharmacological agents) and discuss the relative significance of medication within the spectrum of treatment strategies that have been tested for patients with personality disorders, up to now. The recommendations should help the clinician to evaluate the efficacy spectrum of psychotropic drugs and therefore to select the drug best suited to the specific psychopathology of an individual patient diagnosed for a personality disorder.",
"title": ""
},
{
"docid": "0122057f9fd813efd9f9e0db308fe8d9",
"text": "Noun phrases in queries are identified and classified into four types: proper names, dictionary phrases, simple phrases and complex phrases. A document has a phrase if all content words in the phrase are within a window of a certain size. The window sizes for different types of phrases are different and are determined using a decision tree. Phrases are more important than individual terms. Consequently, documents in response to a query are ranked with matching phrases given a higher priority. We utilize WordNet to disambiguate word senses of query terms. Whenever the sense of a query term is determined, its synonyms, hyponyms, words from its definition and its compound words are considered for possible additions to the query. Experimental results show that our approach yields between 23% and 31% improvements over the best-known results on the TREC 9, 10 and 12 collections for short (title only) queries, without using Web data.",
"title": ""
},
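The windowed phrase-matching rule the abstract above relies on, namely that a document has a phrase if all of its content words fall within a window whose size depends on the phrase type, can be sketched directly. The window sizes below are illustrative assumptions, not the values learned by the paper's decision tree.

```python
# A document "has" a phrase if all the phrase's content words occur within
# a window of the given size somewhere in the document.
import itertools

WINDOW_BY_TYPE = {"proper_name": 2, "dictionary": 8, "simple": 16, "complex": 32}

def has_phrase(doc_tokens, phrase_words, window):
    positions = {w: [i for i, t in enumerate(doc_tokens) if t == w]
                 for w in phrase_words}
    if any(not p for p in positions.values()):
        return False  # some content word is missing entirely
    # Does some choice of one occurrence per word fit inside the window?
    for combo in itertools.product(*positions.values()):
        if max(combo) - min(combo) < window:
            return True
    return False

# has_phrase(doc_tokens, ["sentiment", "classification"], WINDOW_BY_TYPE["simple"])
```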
{
"docid": "5416e2a3f5a1855f19814eecec85092a",
"text": "Code clones are exactly or nearly similar code fragments in the code-base of a software system. Existing studies show that clones are directly related to bugs and inconsistencies in the code-base. Code cloning (making code clones) is suspected to be responsible for replicating bugs in the code fragments. However, there is no study on the possibilities of bug-replication through cloning process. Such a study can help us discover ways of minimizing bug-replication. Focusing on this we conduct an empirical study on the intensities of bug-replication in the code clones of the major clone-types: Type 1, Type 2, and Type 3. According to our investigation on thousands of revisions of six diverse subject systems written in two different programming languages, C and Java, a considerable proportion (i.e., up to 10%) of the code clones can contain replicated bugs. Both Type 2 and Type 3 clones have higher tendencies of having replicated bugs compared to Type 1 clones. Thus, Type 2 and Type 3 clones are more important from clone management perspectives. The extent of bug-replication in the buggy clone classes is generally very high (i.e., 100% in most of the cases). We also find that overall 55% of all the bugs experienced by the code clones can be replicated bugs. Our study shows that replication of bugs through cloning is a common phenomenon. Clone fragments having method-calls and if-conditions should be considered for refactoring with high priorities, because such clone fragments have high possibilities of containing replicated bugs. We believe that our findings are important for better maintenance of software systems, in particular, systems with code clones.",
"title": ""
},
{
"docid": "ea95f4475bb65f7ea0f270387919df47",
"text": "The field of supramolecular chemistry focuses on the non-covalent interactions between molecules that give rise to molecular recognition and self-assembly processes. Since most non-covalent interactions are relatively weak and form and break without significant activation barriers, many supramolecular systems are under thermodynamic control. Hence, traditionally, supramolecular chemistry has focused predominantly on systems at equilibrium. However, more recently, self-assembly processes that are governed by kinetics, where the outcome of the assembly process is dictated by the assembly pathway rather than the free energy of the final assembled state, are becoming topical. Within the kinetic regime it is possible to distinguish between systems that reside in a kinetic trap and systems that are far from equilibrium and require a continuous supply of energy to maintain a stationary state. In particular, the latter systems have vast functional potential, as they allow, in principle, for more elaborate structural and functional diversity of self-assembled systems - indeed, life is a prime example of a far-from-equilibrium system. In this Review, we compare the different thermodynamic regimes using some selected examples and discuss some of the challenges that need to be addressed when developing new functional supramolecular systems.",
"title": ""
},
{
"docid": "4d87a5793186fc1dcaa51abcc06135a7",
"text": "PURPOSE OF REVIEW\nArboviruses have been associated with central and peripheral nervous system injuries, in special the flaviviruses. Guillain-Barré syndrome (GBS), transverse myelitis, meningoencephalitis, ophthalmological manifestations, and other neurological complications have been recently associated to Zika virus (ZIKV) infection. In this review, we aim to analyze the epidemiological aspects, possible pathophysiology, and what we have learned about the clinical and laboratory findings, as well as treatment of patients with ZIKV-associated neurological complications.\n\n\nRECENT FINDINGS\nIn the last decades, case series have suggested a possible link between flaviviruses and development of GBS. Recently, large outbreaks of ZIKV infection in Asia and the Americas have led to an increased incidence of GBS in these territories. Rapidly, several case reports and case series have reported an increase of all clinical forms and electrophysiological patterns of GBS, also including cases with associated central nervous system involvement. Finally, cases suggestive of acute transient polyneuritis, as well as acute and progressive postinfectious neuropathies associated to ZIKV infection have been reported, questioning the usually implicated mechanisms of neuronal injury.\n\n\nSUMMARY\nThe recent ZIKV outbreaks have triggered the occurrence of a myriad of neurological manifestations likely associated to this arbovirosis, in special GBS and its variants.",
"title": ""
},
{
"docid": "f312bfe7f80fdf406af29bfde635fa36",
"text": "In two studies, a newly devised test (framed-line test) was used to examine the hypothesis that individuals engaging in Asian cultures are more capable of incorporating contextual information and those engaging in North American cultures are more capable of ignoring contextual information. On each trial, participants were presented with a square frame, within which was printed a vertical line. Participants were then shown another square frame of the same or different size and asked to draw a line that was identical to the first line in either absolute length (absolute task) or proportion to the height of the surrounding frame (relative task). The results supported the hypothesis: Whereas Japanese were more accurate in the relative task, Americans were more accurate in the absolute task. Moreover, when engaging in another culture, individuals tended to show the cognitive characteristic common in the host culture.",
"title": ""
},
{
"docid": "b213afb537bbc4c476c760bb8e8f2997",
"text": "Recommender system has been demonstrated as one of the most useful tools to assist users' decision makings. Several recommendation algorithms have been developed and implemented by both commercial and open-source recommendation libraries. Context-aware recommender system (CARS) emerged as a novel research direction during the past decade and many contextual recommendation algorithms have been proposed. Unfortunately, no recommendation engines start to embed those algorithms in their kits, due to the special characteristics of the data format and processing methods in the domain of CARS. This paper introduces an open-source Java-based context-aware recommendation engine named as CARSKit which is recognized as the 1st open source recommendation library specifically designed for CARS. It implements the state-of-the-art context-aware recommendation algorithms, and we will showcase the ease with which CARSKit allows recommenders to be configured and evaluated in this demo.",
"title": ""
},
{
"docid": "101c03b85e3cc8518a158d89cc9b3b39",
"text": "Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.",
"title": ""
}
] |
scidocsrr
|
d98f68cc59d1386a2b1207517090fc87
|
Improving Question Answering with External Knowledge
|
[
{
"docid": "e79679c3ed82c1c7ab83cfc4d6e0280e",
"text": "Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).",
"title": ""
},
{
"docid": "d5d03cdfd3a6d6c2b670794d76e91c8e",
"text": "We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task. Collected from the English exams for middle and high school Chinese students in the age range between 12 to 18, RACE consists of near 28,000 passages and near 100,000 questions generated by human experts (English instructors), and covers a variety of topics which are carefully designed for evaluating the students’ ability in understanding and reasoning. In particular, the proportion of questions that requires reasoning is much larger in RACE than that in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of the state-of-the-art models (43%) and the ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at http://www.cs.cmu.edu/ ̃glai1/data/race/ and the code is available at https://github.com/ cheezer/RACE_AR_baselines.",
"title": ""
},
{
"docid": "fe39547650623fbf86be3da46a6c5a8b",
"text": "This paper describes our system for SemEval2018 Task 11: Machine Comprehension using Commonsense Knowledge (Ostermann et al., 2018b). We use Threeway Attentive Networks (TriAN) to model interactions between the passage, question and answers. To incorporate commonsense knowledge, we augment the input with relation embedding from the graph of general knowledge ConceptNet (Speer et al., 2017). As a result, our system achieves state-of-the-art performance with 83.95% accuracy on the official test data. Code is publicly available at https://github.com/ intfloat/commonsense-rc.",
"title": ""
},
{
"docid": "8f3d86a21b8a19c7d3add744c2e5e202",
"text": "Question answering (QA) systems are easily distracted by irrelevant or redundant words in questions, especially when faced with long or multi-sentence questions in difficult domains. This paper introduces and studies the notion of essential question terms with the goal of improving such QA solvers. We illustrate the importance of essential question terms by showing that humans’ ability to answer questions drops significantly when essential terms are eliminated from questions. We then develop a classifier that reliably (90% mean average precision) identifies and ranks essential terms in questions. Finally, we use the classifier to demonstrate that the notion of question term essentiality allows state-of-the-art QA solvers for elementary-level science questions to make better and more informed decisions, improving performance by up to 5%. We also introduce a new dataset of over 2,200 crowd-sourced essential terms annotated science questions.",
"title": ""
}
] |
[
{
"docid": "69e87ea7f07f96088486b7dd9105841b",
"text": "When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes propositions according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks.",
"title": ""
},
{
"docid": "432ff163e4dded948aa5a27aa440cd30",
"text": "Eighty-one female and sixty-seven male undergraduates at a Malaysian university, from seven faculties and a Center for Language Studies completed a Computer Self-Efficacy Scale, Computer Anxiety Scale, and an Attitudes toward the Internet Scale and give information about their use of the Internet. This survey research investigated undergraduates’ computer anxiety, computer self-efficacy, and reported use of and attitudes toward the Internet. This study also examined differences in computer anxiety, computer selfefficacy, attitudes toward the Internet and reported use of the Internet for undergraduates with different demographic variables. The findings suggest that the undergraduates had moderate computer anxiousness, medium attitudes toward the Internet, and high computer self-efficacy and used the Internet extensively for educational purposes such as doing research, downloading electronic resources and e-mail communications. This study challenges the long perceived male bias in the computer environment and supports recent studies that have identified greater gender equivalence in interest, use, and skills levels. However, there were differences in undergraduates’ Internet usage levels based on the discipline of study. Furthermore, higher levels of Internet usage did not necessarily translate into better computer self-efficacy among the undergraduates. A more important factor in determining computer self-efficacy could be the discipline of study and undergraduates studying computer related disciplines appeared to have higher self-efficacy towards computers and the Internet. Undergraduates who used the Internet more often may not necessarily feel more comfortable using them. Possibly, other factors such as the types of application used, the purpose for using, and individual satisfaction could also influence computer self-efficacy and computer anxiety. However, although Internet usage levels may not have any impact on computer self-efficacy, higher usage of the Internet does seem to decrease the levels of computer anxiety among the undergraduates. Undergraduates with lower computer anxiousness demonstrated more positive attitudes toward the Internet in this study.",
"title": ""
},
{
"docid": "e37b3a68c850d1fb54c9030c22b5792f",
"text": "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or nonmembrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.",
"title": ""
},
{
"docid": "602ccb25257c6ce6c0bca2cb81c00628",
"text": "The detection and tracking of moving vehicles is a necessity for collision-free navigation. In natural unstructured environments, motion-based detection is challenging due to low signal to noise ratio. This paper describes our approach for a 14 km/h fast autonomous outdoor robot that is equipped with a Velodyne HDL-64E S2 for environment perception. We extend existing work that has proven reliable in urban environments. To overcome the unavailability of road network information for background separation, we introduce a foreground model that incorporates geometric as well as temporal cues. Local shape estimates successfully guide vehicle localization. Extensive evaluation shows that the system works reliably and efficiently in various outdoor scenarios without any prior knowledge about the road network. Experiments with our own sensor as well as on publicly available data from the DARPA Urban Challenge revealed more than 96% correctly identified vehicles.",
"title": ""
},
{
"docid": "8b7f931e800cd1ae810453ecbc35b225",
"text": "In this paper we present empirical results from a study examining the effects of antenna diversity and placement on vehicle-to-vehicle link performance in vehicular ad hoc networks. The experiments use roof- and in-vehicle mounted omni-directional antennas and IEEE 802.11a radios operating in the 5 GHz band, which is of interest for planned inter-vehicular communication standards. Our main findings are two-fold. First, we show that radio reception performance is sensitive to antenna placement in the 5 Ghz band. Second, our results show that, surprisingly, a packet level selection diversity scheme using multiple antennas and radios, multi-radio packet selection (MRPS), improves performance not only in a fading channel but also in line-of-sight conditions. This is due to propagation being affected by car geometry, leading to the highly non-uniform antenna patterns. These patterns are very sensitive to the exact antenna position on the roof, for example at a transmit power of 40 mW the line-of-sight communication range varied between 50 and 250 m depending on the orientation of the cars. These findings have implications for vehicular MAC protocol design. Protocols may have to cope with an increased number of hidden nodes due to the directional antenna patterns. However, car makers can reduce these effects through careful antenna placement and diversity.",
"title": ""
},
{
"docid": "fe48a551dfbe397b7bcf52e534dfcf00",
"text": "This meta-analysis of 12 dependent variables from 9 quantitative studies comparing music to no-music conditions during treatment of children and adolescents with autism resulted in an overall effect size of d =.77 and a mean weighted correlation of r =.36 (p =.00). Since the confidence interval did not include 0, results were considered to be significant. All effects were in a positive direction, indicating benefits of the use of music in intervention. The homogeneity Q value was not significant (p =.83); therefore, results of included studies are considered to be homogeneous and explained by the overall effect size. The significant effect size, combined with the homogeneity of the studies, leads to the conclusion that all music intervention, regardless of purpose or implementation, has been effective for children and adolescents with autism. Included studies are described in terms of type of dependent variables measured; theoretical approach; number of subjects in treatment sessions; participation in and use, selection, and presentation of music; researcher discipline; published or unpublished source; and subject age. Clinical implications as well as recommendations for future research are discussed.",
"title": ""
},
{
"docid": "73e4f93a46d8d66599aaaeaf71c8efe2",
"text": "The galvanometer-based scanners (GS) are oscillatory optical systems utilized in high-end biomedical technologies. From a control point-of-view the GSs are mechatronic systems (mainly positioning servo-systems) built usually in a close loop structure and controlled by different control algorithms. The paper presents a Model based Predictive Control (MPC) solution for the mobile equipment (moving magnet and galvomirror) of a GS. The development of a high-performance control solution is based to a basic closed loop GS which consists of a PD-L1 controller and a servomotor. The mathematical model (MM) and the parameters of the basic construction are identified using a theoretical approach followed by an experimental identification. The equipment is used in our laboratory for better dynamical performances for biomedical imaging systems. The control solutions proposed are supported by simulations carried out in Matlab/Simulink.",
"title": ""
},
{
"docid": "cb7dda8f4059e5a66e4a6e26fcda601e",
"text": "Purpose – This UK-based research aims to build on the US-based work of Keller and Aaker, which found a significant association between “company credibility” (via a brand’s “expertise” and “trustworthiness”) and brand extension acceptance, hypothesising that brand trust, measured via two correlate dimensions, is significantly related to brand extension acceptance. Design/methodology/approach – Discusses brand extension and various prior, validated influences on its success. Focuses on the construct of trust and develops hypotheses about the relationship of brand trust with brand extension acceptance. The hypotheses are then tested on data collected from consumers in the UK. Findings – This paper, using 368 consumer responses to nine, real, low involvement UK product and service brands, finds support for a significant association between the variables, comparable in strength with that between media weight and brand share, and greater than that delivered by the perceived quality level of the parent brand. Originality/value – The research findings, which develop a sparse literature in this linkage area, are of significance to marketing practitioners, since brand trust, already associated with brand equity and brand loyalty, and now with brand extension, needs to be managed and monitored with care. The paper prompts further investigation of the relationship between brand trust and brand extension acceptance in other geographic markets and with other higher involvement categories.",
"title": ""
},
{
"docid": "1ea21d88740aa6b2712205823f141e57",
"text": "AIM\nOne of the critical aspects of esthetic dentistry is creating geometric or mathematical proportions to relate the successive widths of the anterior teeth. The golden proportion, the recurring esthetic dental (RED) proportion, and the golden percentage are theories introduced in this field. The aim of this study was to investigate the existence of the golden proportion, RED proportion, and the golden percentage between the widths of the maxillary anterior teeth in individuals with natural dentition.\n\n\nMETHODS AND MATERIALS\nStandardized frontal images of 376 dental student smiles were captured. The images were transferred to a personal computer, the widths of the maxillary anterior teeth were measured, and calculations were made according to each of the above mentioned theories. The data were statistically analyzed using paired student T-test (level of significance P<0.05).\n\n\nRESULTS\nThe golden proportion was found to be accurate between the width of the right central and lateral incisors in 31.3% of men and 27.1% of women. The values of the RED proportion were not constant, and the farther the one moves distally from the midline the higher the values. Furthermore, the results revealed the golden percentage was rather constant in terms of relative tooth width. The width of the central incisor represents 23%, the lateral incisor 15%, and the canine 12% of the width of the six maxillary anterior teeth as viewed from the front.\n\n\nCONCLUSIONS\nBoth the golden proportion and the RED proportion are unsuitable methods to relate the successive widths of the maxillary anterior teeth. However, the golden percentage theory seems to be applicable to relate the successive widths of the maxillary anterior teeth if percentages are adjusted taking into consideration the ethnicity of the population.",
"title": ""
},
{
"docid": "543a0cd5ac9aae173a1af5c3215b002f",
"text": "Situated question answering is the problem of answering questions about an environment such as an image or diagram. This problem requires jointly interpreting a question and an environment using background knowledge to select the correct answer. We present Parsing to Probabilistic Programs (P ), a novel situated question answering model that can use background knowledge and global features of the question/environment interpretation while retaining efficient approximate inference. Our key insight is to treat semantic parses as probabilistic programs that execute nondeterministically and whose possible executions represent environmental uncertainty. We evaluate our approach on a new, publicly-released data set of 5000 science diagram questions, outperforming several competitive classical and neural baselines.",
"title": ""
},
{
"docid": "bbc984f02b81ee66d7dc617ed34a7e98",
"text": "Packet losses are common in data center networks, may be caused by a variety of reasons (e.g., congestion, blackhole), and have significant impacts on application performance and network operations. Thus, it is important to provide fast detection of packet losses independent of their root causes. We also need to capture both the locations and packet header information of the lost packets to help diagnose and mitigate these losses. Unfortunately, existing monitoring tools that are generic in capturing all types of network events often fall short in capturing losses fast with enough details and low overhead. Due to the importance of loss in data centers, we propose a specific monitoring system designed for loss detection. We propose LossRadar, a system that can capture individual lost packets and their detailed information in the entire network on a fine time scale. Our extensive evaluation on prototypes and simulations demonstrates that LossRadar is easy to implement in hardware switches, achieves low memory and bandwidth overhead, while providing detailed information about individual lost packets. We also build a loss analysis tool that demonstrates the usefulness of LossRadar with a few example applications.",
"title": ""
},
{
"docid": "ee532e8bb51a7b49506df59bd9ad3282",
"text": "People learn from tests. Providing tests often enhances retention more than additional study opportunities, but is this testing effect mediated by processes related to retrieval that are fundamentally different from study processes? Some previous studies have reported that testing enhances retention relative to additional studying, but only after a relatively long retention interval. To the extent that this interaction with retention interval dissociates the effects of studying and testing, it may provide crucial evidence for different underlying processes. However, these findings can be questioned because of methodological differences between the study and the test conditions. In two experiments, we eliminated or minimized the confounds that rendered the previous findings equivocal and still obtained the critical interaction. Our results strengthen the evidence for the involvement of different processes underlying the effects of studying and testing, and support the hypothesis that the testing effect is grounded in retrieval-related processes.",
"title": ""
},
{
"docid": "bff21b4a0bc4e7cc6918bc7f107a5ca5",
"text": "This paper discusses driving system design based on traffic rules. This allows fully automated driving in an environment with human drivers, without necessarily changing equipment on other vehicles or infrastructure. It also facilitates cooperation between the driving system and the host driver during highly automated driving. The concept, referred to as legal safety, is illustrated for highly automated driving on highways with distance keeping, intelligent speed adaptation, and lane-changing functionalities. Requirements by legal safety on perception and control components are discussed. This paper presents the actual design of a legal safety decision component, which predicts object trajectories and calculates optimal subject trajectories. System implementation on automotive electronic control units and results on vehicle and simulator are discussed.",
"title": ""
},
{
"docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf",
"text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.",
"title": ""
},
{
"docid": "16fec520bf539ab23a5164ffef5561b4",
"text": "This article traces the major trends in TESOL methods in the past 15 years. It focuses on the TESOL profession’s evolving perspectives on language teaching methods in terms of three perceptible shifts: (a) from communicative language teaching to task-based language teaching, (b) from method-based pedagogy to postmethod pedagogy, and (c) from systemic discovery to critical discourse. It is evident that during this transitional period, the profession has witnessed a heightened awareness about communicative and task-based language teaching, about the limitations of the concept of method, about possible postmethod pedagogies that seek to address some of the limitations of method, about the complexity of teacher beliefs that inform the practice of everyday teaching, and about the vitality of the macrostructures—social, cultural, political, and historical—that shape the microstructures of the language classroom. This article deals briefly with the changes and challenges the trend-setting transition seems to be bringing about in the profession’s collective thought and action.",
"title": ""
},
{
"docid": "9420760d6945440048cee3566ce96699",
"text": "In this work, we develop a computer vision based fall prevention system for hospital ward application. To prevent potential falls, once the event of patient get up from the bed is automatically detected, nursing staffs are alarmed immediately for assistance. For the detection task, we use a RGBD sensor (Microsoft Kinect). The geometric prior knowledge is exploited by identifying a set of task-specific feature channels, e.g., regions of interest. Extensive motion and shape features from both color and depth image sequences are extracted. Features from multiple modalities and channels are fused via a multiple kernel learning framework for training the event detector. Experimental results demonstrate the high accuracy and efficiency achieved by the proposed system.",
"title": ""
},
{
"docid": "76502e21fbb777a3442928897ef271f0",
"text": "Staphylococcus saprophyticus (S. saprophyticus) is a Gram-positive, coagulase-negative facultative bacterium belongs to Micrococcaceae family. It is a unique uropathogen associated with uncomplicated urinary tract infections (UTIs), especially cystitis in young women. Young women are very susceptible to colonize this organism in the urinary tracts and it is spread through sexual intercourse. S. saprophyticus is the second most common pathogen after Escherichia coli causing 10-20% of all UTIs in sexually active young women [13]. It contains the urease enzymes that hydrolyze the urea to produce ammonia. The urease activity is the main factor for UTIs infection. Apart from urease activity it has numerous transporter systems to adjust against change in pH, osmolarity, and concentration of urea in human urine [2]. After severe infections, it causes various complications such as native valve endocarditis [4], pyelonephritis, septicemia, [5], and nephrolithiasis [6]. About 150 million people are diagnosed with UTIs each year worldwide [7]. Several virulence factors includes due to the adherence to urothelial cells by release of lipoteichoic acid is a surface-associated adhesion amphiphile [8], a hemagglutinin that binds to fibronectin and hemagglutinates sheep erythrocytes [9], a hemolysin; and production of extracellular slime are responsible for resistance properties of S. saprophyticus [1]. Based on literature, S. saprophyticus strains are susceptible to vancomycin, rifampin, gentamicin and amoxicillin-clavulanic, while resistance to other antimicrobials such as erythromycin, clindamycin, fluoroquinolones, chloramphenicol, trimethoprim/sulfamethoxazole, oxacillin, and Abstract",
"title": ""
},
{
"docid": "ceef658faa94ad655521ece5ac5cba1d",
"text": "We propose learning a semantic visual feature representation by training a neural network supervised solely by point and object trajectories in video sequences. Currently, the predominant paradigm for learning visual features involves training deep convolutional networks on an image classification task using very large human-annotated datasets, e.g. ImageNet. Though effective as supervision, semantic image labels are costly to obtain. On the other hand, under high enough frame rates, frame-to-frame associations between the same 3D physical point or an object can be established automatically. By transitivity, such associations grouped into tracks can relate object/point appearance across large changes in pose, illumination and camera viewpoint, providing a rich source of invariance that can be used for training. We train a siamese network we call it AssociationNet to discriminate between correct and wrong associations between patches in different frames of a video sequence. We show that AssociationNet learns useful features when used as pretraining for object recognition in static images, and outperforms random weight initialization and alternative pretraining methods.",
"title": ""
},
{
"docid": "d00957d93af7b2551073ba84b6c0f2a6",
"text": "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN’s evaluation. Experimental results show that SSL achieves on average 5.1× and 3.1× speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25% to 92.60%, which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by ∼ 1%. Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn",
"title": ""
},
{
"docid": "1c3cf3ccdb3b7129c330499ca909b193",
"text": "Procedural methods for animating turbulent fluid are often preferred over simulation, both for speed and for the degree of animator control. We offer an extremely simple approach to efficiently generating turbulent velocity fields based on Perlin noise, with a formula that is exactly incompressible (necessary for the characteristic look of everyday fluids), exactly respects solid boundaries (not allowing fluid to flow through arbitrarily-specified surfaces), and whose amplitude can be modulated in space as desired. In addition, we demonstrate how to combine this with procedural primitives for flow around moving rigid objects, vortices, etc.",
"title": ""
}
] |
scidocsrr
|
8dbeb1c275a094146b26ea1ab3e314cc
|
Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling
|
[
{
"docid": "93f1ee5523f738ab861bcce86d4fc906",
"text": "Semantic role labeling (SRL) is one of the basic natural language processing (NLP) problems. To this date, most of the successful SRL systems were built on top of some form of parsing results (Koomen et al., 2005; Palmer et al., 2010; Pradhan et al., 2013), where pre-defined feature templates over the syntactic structure are used. The attempts of building an end-to-end SRL learning system without using parsing were less successful (Collobert et al., 2011). In this work, we propose to use deep bi-directional recurrent network as an end-to-end system for SRL. We take only original text information as input feature, without using any syntactic knowledge. The proposed algorithm for semantic role labeling was mainly evaluated on CoNLL-2005 shared task and achieved F1 score of 81.07. This result outperforms the previous state-of-the-art system from the combination of different parsing trees or models. We also obtained the same conclusion with F1 = 81.27 on CoNLL2012 shared task. As a result of simplicity, our model is also computationally efficient that the parsing speed is 6.7k tokens per second. Our analysis shows that our model is better at handling longer sentences than traditional models. And the latent variables of our model implicitly capture the syntactic structure of a sentence.",
"title": ""
},
{
"docid": "b10447097f8d513795b4f4e08e1838d8",
"text": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.",
"title": ""
},
{
"docid": "60d21d395c472eb36bdfd014c53d918a",
"text": "We introduce a fully differentiable approximation to higher-order inference for coreference resolution. Our approach uses the antecedent distribution from a span-ranking architecture as an attention mechanism to iteratively refine span representations. This enables the model to softly consider multiple hops in the predicted clusters. To alleviate the computational cost of this iterative process, we introduce a coarse-to-fine approach that incorporates a less accurate but more efficient bilinear factor, enabling more aggressive pruning without hurting accuracy. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the English OntoNotes benchmark, while being far more computationally efficient.",
"title": ""
}
] |
[
{
"docid": "d0f50fa4ef375759dcac7270b006f147",
"text": "Automatic separation of signatures from a document page involves difficult challenges due to the free-flow nature of handwriting, overlapping/touching of signature parts with printed text, noise, etc. In this paper, we have proposed a novel approach for the segmentation of signatures from machine printed signed documents. The algorithm first locates the signature block in the document using word level feature extraction. Next, the signature strokes that touch or overlap with the printed texts are separated. A stroke level classification is then performed using skeleton analysis to separate the overlapping strokes of printed text from the signature. Gradient based features and Support Vector Machine (SVM) are used in our scheme. Finally, a Conditional Random Field (CRF) model energy minimization concept based on approximated labeling by graph cut is applied to label the strokes as \"signature\" or \"printed text\" for accurate segmentation of signatures. Signature segmentation experiment is performed in \"tobacco\" dataset1 and we have obtained encouraging results.",
"title": ""
},
{
"docid": "81c02e708a21532d972aca0b0afd8bb5",
"text": "We propose a new tree-based ORAM scheme called Circuit ORAM. Circuit ORAM makes both theoretical and practical contributions. From a theoretical perspective, Circuit ORAM shows that the well-known Goldreich-Ostrovsky logarithmic ORAM lower bound is tight under certain parameter ranges, for several performance metrics. Therefore, we are the first to give an answer to a theoretical challenge that remained open for the past twenty-seven years. Second, Circuit ORAM earns its name because it achieves (almost) optimal circuit size both in theory and in practice for realistic choices of block sizes. We demonstrate compelling practical performance and show that Circuit ORAM is an ideal candidate for secure multi-party computation applications.",
"title": ""
},
{
"docid": "bb8115f8c172e22bd0ff70bd079dfa98",
"text": "This paper reports on the second generation of the Pleated Pneumatic Artificial Muscle (PPAM) which has been developed to extend the life span of its first prototype. This type of artificial was developed to overcome dry friction and material deformation which is present in the widely used McKibben type of artificial muscle. The essence of the PPAM is its pleated membrane structure which enables the muscle to work at low pressures and at large contractions. There is a growing interest in this kind of actuation for robotics applications due to its high power to weight ratio and the adaptable compliance, especially for legged locomotion and robot applications in direct contact with a human. This paper describes the design of the second generation PPAM, for which specifically the membrane layout has been changed. In function of this new layout the mathematical model, developed for the first prototype, has been reformulated. This paper gives an elaborate discussion on this mathematical model which represents the force generation and enclosed muscle volume. Static load tests on some real muscles, which have been carried out in order to validate the mathematical model, are then discussed. Furthermore are given two robotic applications which currently use these pneumatic artificial muscles. One is the biped Lucy and the another one is a manipulator application which works in direct contact with an operator.",
"title": ""
},
{
"docid": "5562bb6fdc8864a23e7ec7992c7bb023",
"text": "Bacteria are known to communicate primarily via secreted extracellular factors. Here we identify a previously uncharacterized type of bacterial communication mediated by nanotubes that bridge neighboring cells. Using Bacillus subtilis as a model organism, we visualized transfer of cytoplasmic fluorescent molecules between adjacent cells. Additionally, by coculturing strains harboring different antibiotic resistance genes, we demonstrated that molecular exchange enables cells to transiently acquire nonhereditary resistance. Furthermore, nonconjugative plasmids could be transferred from one cell to another, thereby conferring hereditary features to recipient cells. Electron microscopy revealed the existence of variously sized tubular extensions bridging neighboring cells, serving as a route for exchange of intracellular molecules. These nanotubes also formed in an interspecies manner, between B. subtilis and Staphylococcus aureus, and even between B. subtilis and the evolutionary distant bacterium Escherichia coli. We propose that nanotubes represent a major form of bacterial communication in nature, providing a network for exchange of cellular molecules within and between species.",
"title": ""
},
{
"docid": "a9d93cb2c0d6d76a8597bcd64ecd00ba",
"text": "Hospital-based nurses (N = 832) and doctors (N = 603) in northern and eastern Spain completed a survey of job burnout, areas of work life, and management issues. Analysis of the results provides support for a mediation model of burnout that depicts employees’ energy, involvement, and efficacy as intermediary experiences between their experiences of work life and their evaluations of organizational change. The key element of this model is its focus on employees’ capacity to influence their work environments toward greater conformity with their core values. The model considers 3 aspects of that capacity: decision-making participation, organizational justice, and supervisory relationships. The analysis supports this model and emphasizes a central role for first-line supervisors in employees’ experiences of work life.jasp_563 57..75",
"title": ""
},
{
"docid": "20e09739910e5f3e7e721937b3464b6c",
"text": "The Andes system demonstrates that student learning can be significantly increased by upgrading only their homework problem-solving support. Although Andes is called an intelligent tutoring system, it actually replaces only the students' pencil and paper as they do problem-solving homework. Students do the same problems as before, study the same textbook, and attend the same lectures, labs and recitations. Five years of experimentation at the United States Naval Academy indicates that Andes significantly improves student learning. Andes' key feature appears to be the grain-size of interaction. Whereas most tutoring systems have students enter only the answer to a problem, Andes has students enter a whole derivation, which may consist of many steps, such as drawing vectors, drawing coordinate systems, defining variables and writing equations. Andes gives feedback after each step. When the student asks for help in the middle of problem-solving, Andes gives hints on what's wrong with an incorrect step or on what kind of step to do next. Thus, the grain size of Andes' interaction is a single step in solving the problem, whereas the grain size of a typical tutoring system's interaction is the answer to the problem. This report is a comprehensive description of Andes. It describes Andes' pedagogical principles and features, the system design and implementation, the evaluations of pedagogical effectiveness, and our plans for dissemination.",
"title": ""
},
{
"docid": "341e3832bf751688a9deabdfb5687f69",
"text": "The NINCDS-ADRDA and the DSM-IV-TR criteria for Alzheimer's disease (AD) are the prevailing diagnostic standards in research; however, they have now fallen behind the unprecedented growth of scientific knowledge. Distinctive and reliable biomarkers of AD are now available through structural MRI, molecular neuroimaging with PET, and cerebrospinal fluid analyses. This progress provides the impetus for our proposal of revised diagnostic criteria for AD. Our framework was developed to capture both the earliest stages, before full-blown dementia, as well as the full spectrum of the illness. These new criteria are centred on a clinical core of early and significant episodic memory impairment. They stipulate that there must also be at least one or more abnormal biomarkers among structural neuroimaging with MRI, molecular neuroimaging with PET, and cerebrospinal fluid analysis of amyloid beta or tau proteins. The timeliness of these criteria is highlighted by the many drugs in development that are directed at changing pathogenesis, particularly at the production and clearance of amyloid beta as well as at the hyperphosphorylation state of tau. Validation studies in existing and prospective cohorts are needed to advance these criteria and optimise their sensitivity, specificity, and accuracy.",
"title": ""
},
{
"docid": "4af5aa24efc82a8e66deb98f224cd033",
"text": "Abstract—In the recent years, the rapid spread of mobile device has create the vast amount of mobile data. However, some shallow-structure models such as support vector machine (SVM) have difficulty dealing with high dimensional data with the development of mobile network. In this paper, we analyze mobile data to predict human trajectories in order to understand human mobility pattern via a deep-structure model called “DeepSpace”. To the best of out knowledge, it is the first time that the deep learning approach is applied to predicting human trajectories. Furthermore, we develop the vanilla convolutional neural network (CNN) to be an online learning system, which can deal with the continuous mobile data stream. In general, “DeepSpace” consists of two different prediction models corresponding to different scales in space (the coarse prediction model and fine prediction models). This two models constitute a hierarchical structure, which enable the whole architecture to be run in parallel. Finally, we test our model based on the data usage detail records (UDRs) from the mobile cellular network in a city of southeastern China, instead of the call detail records (CDRs) which are widely used by others as usual. The experiment results show that “DeepSpace” is promising in human trajectories prediction.",
"title": ""
},
{
"docid": "6ba2aed7930d4c7fee807a0f4904ddc5",
"text": "This work is released in biometric field and has as goal, development of a full automatic fingerprint identification system based on support vector machine. Promising Results of first experiences pushed us to develop codification and recognition algorithms which are specifically associated to this system. In this context, works were consecrated on algorithm developing of the original image processing, minutiae and singular points localization; Gabor filters coding and testing these algorithms on well known databases which are: FVC2004 databases & FingerCell database. Performance Evaluating has proved that SVM achieved a good recognition rate in comparing with results obtained using a classic neural network RBF. Keywords—Biometry, Core and Delta points Detection, Gabor filters coding, Image processing and Support vector machine.",
"title": ""
},
{
"docid": "54ceed51f750eadda3038b42eb9977a5",
"text": "Starting from the revolutionary Retinex by Land and McCann, several further perceptually inspired color correction models have been developed with different aims, e.g. reproduction of color sensation, robust features recognition, enhancement of color images. Such models have a differential, spatially-variant and non-linear nature and they can coarsely be distinguished between white-patch (WP) and gray-world (GW) algorithms. In this paper we show that the combination of a pure WP algorithm (RSR: random spray Retinex) and an essentially GW one (ACE) leads to a more robust and better performing model (RACE). The choice of RSR and ACE follows from the recent identification of a unified spatially-variant approach for both algorithms. Mathematically, the originally distinct non-linear and differential mechanisms of RSR and ACE have been fused using the spray technique and local average operations. The investigation of RACE allowed us to put in evidence a common drawback of differential models: corruption of uniform image areas. To overcome this intrinsic defect, we devised a local and global contrast-based and image-driven regulation mechanism that has a general applicability to perceptually inspired color correction algorithms. Tests, comparisons and discussions are presented.",
"title": ""
},
{
"docid": "1c63438d58ef3817ce9b637bddc57fc1",
"text": "Object recognition strategies are increasingly based on regional descriptors such as SIFT or HOG at a sparse set of points or on a dense grid of points. Despite their success on databases such as PASCAL and CALTECH, the capability of such a representation in capturing the essential object content of the image is not well-understood: How large is the equivalence class of images sharing the same HOG descriptor? Are all these images from the same object category, and if not, do the non-category images resemble random images which cannot generically arise from imaged scenes? How frequently do images from two categories share the same HOG-based representation? These questions are increasingly more relevant as very large databases such as ImageNet and LabelMe are being developed where the current object recognition strategies show limited success. We examine these questions by introducing the metameric class of moments of HOG which allows for a target image to be morphed into an impostor image sharing the HOG representation of a source image while retaining the initial visual appearance. We report that two distinct images can be made to share the same HOG representation when the overlap between HOG patches is minimal, and the success of this method falls with increasing overlap. This paper is therefore a step in the direction of developing a sampling theorem for representing images by HOG features.",
"title": ""
},
{
"docid": "2a4360b7031aa9c191a81b1b14307db9",
"text": "Wireless body area network (BAN) is a promising technology for real-time monitoring of physiological signals to support medical applications. In order to ensure the trustworthy and reliable gathering of patient's critical health information, it is essential to provide node authentication service in a BAN, which prevents an attacker from impersonation and false data/command injection. Although quite fundamental, the authentication in BAN still remains a challenging issue. On one hand, traditional authentication solutions depend on prior trust among nodes whose establishment would require either key pre-distribution or non-intuitive participation by inexperienced users, while they are vulnerable to key compromise. On the other hand, most existing non-cryptographic authentication schemes require advanced hardware capabilities or significant modifications to the system software, which are impractical for BANs.\n In this paper, for the first time, we propose a lightweight body area network authentication scheme (BANA) that does not depend on prior-trust among the nodes and can be efficiently realized on commercial off-the-shelf low-end sensor devices. This is achieved by exploiting physical layer characteristics unique to a BAN, namely, the distinct received signal strength (RSS) variation behaviors between an on-body communication channel and an off-body channel. Our main finding is that the latter is more unpredictable over time, especially under various body motion scenarios. This unique channel characteristic naturally arises from the multi-path environment surrounding a BAN, and cannot be easily forged by attackers. We then adopt clustering analysis to differentiate the signals from an attacker and a legitimate node. The effectiveness of BANA is validated through extensive real-world experiments under various scenarios. It is shown that BANA can accurately identify multiple attackers with minimal amount of overhead.",
"title": ""
},
{
"docid": "acbac38a7de49bf1b6ad15abb007b601",
"text": "Our everyday environments are gradually becoming intelligent, facilitated both by technological development and user activities. Although large-scale intelligent environments are still rare in actual everyday use, they have been studied for quite a long time, and several user studies have been carried out. In this paper, we present a user-centric view of intelligent environments based on published research results and our own experiences from user studies with concepts and prototypes. We analyze user acceptance and users’ expectations that affect users’ willingness to start using intelligent environments and to continue using them. We discuss user experience of interacting with intelligent environments where physical and virtual elements are intertwined. Finally, we touch on the role of users in shaping their own intelligent environments instead of just using ready-made environments. People are not merely “using” the intelligent environments but they live in them, and they experience the environments via embedded services and new interaction tools as well as the physical and social environment. Intelligent environments should provide emotional as well as instrumental value to the people who live in them, and the environments should be trustworthy and controllable both by regular users and occasional visitors. Understanding user expectations and user experience in intelligent environments, OPEN ACCESS",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "bf5f3aedb8eadc7c9b12b6d670f93c49",
"text": "Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in hand-written digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat, when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in of acquiring the image in a real world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text driven classification, adversarial examples are an academic curiosity, not a security threat.",
"title": ""
},
{
"docid": "f8005e53658743d70abdf8f6dcb78819",
"text": "We present a novel approach to visually locate bodies of research within the sciences, both at each moment of time and dynamically. This article describes how this approach fits with other efforts to locally and globally map scientific outputs. We then show how these science overlay maps help benchmark, explore collaborations, and track temporal changes, using examples of universities, corporations, funding agencies, and research topics. We address their conditions of application and discuss advantages, downsides and limitations. Overlay maps especially help investigate the increasing number of scientific developments and organisations that do not fit within traditional disciplinary categories. We make these tools available (at the Internet) to enable researchers to explore the ongoing socio-cognitive transformations of science and technology systems.",
"title": ""
},
{
"docid": "bf05dca7c0ac521045794c90c91eba9d",
"text": "The optimization and analysis of new waveguide polarizers have been carried out on the basis of rigorous full-wave model. These polarizers transform the dominant mode of input rectangular waveguide into an elliptically polarized wave of output square waveguide. The phase-shifting module is realized on the basis of one or two sections of a square waveguide having two diagonally placed square ridges. It has been found out that polarizers with single-section phase shifter can provide the bandwidth from 11% to 15% at the axial ratio level of r < 2 dB and the return loss level of LR > 20 dB, whereas the two-section ones have the bandwidths more 23% at r < 1 dB and LR > 23 dB",
"title": ""
},
{
"docid": "8d957e6c626855a06ac2256c4e7cd15c",
"text": "This article presents a robotic dataset collected from the largest underground copper mine in the world. The sensor measurements from a 3D scanning lidar, a 2D radar, and stereo cameras were recorded from an approximately two kilometer traverse of a production-active tunnel. The equipment used and the data collection process is discussed in detail, along with the format of the data. This dataset is suitable for research in robotic navigation, as well as simultaneous localization and mapping. The download instructions are available at the following website http://dataset.amtc.cl.",
"title": ""
},
{
"docid": "188e52b27ae465c4785f0e1811c3014a",
"text": "High-voltage p-channel 4H-SiC insulated gate bipolar transistors (IGBTs) have been fabricated and characterized. The devices have a forward voltage drop of 7.2 V at 100 A/cm2 and a -16 V gate bias at 25degC, corresponding to a specific on-resistance of 72 mOmega ldr cm2 and a differential on-resistance of 26 mmOmega ldr cm2. Hole mobility of 12 cm2/V ldr s in the inversion channel with a threshold voltage of -6 V was achieved by optimizing the n+ well doping profile and gate oxidation process. A novel current enhancement layer was adopted to reduce the JFET resistance and enhance conductivity modulation by improving hole current spreading and suppressing the electron current conduction through the top n-p-n transistor. Inductive switching results have shown that the p-IGBT exhibited a turn-off time of ~1 mus and a turn-off energy loss of 12 m J at 4-kV dc-link voltage and 6-A load current at 25degC. The turn-off trajectory from the measured inductive load switching waveforms and numerical simulations shows that the p-IGBT had a near-square reverse bias safe operating area. Numerical simulations have been conducted to achieve an improved tradeoff between forward voltage drop and switching off energy by investigating the effects of drift layer lifetime and p-buffer layer parameters. The advantages of SiC p-IGBTs, such as the potential of very low ON-state resistance, slightly positive temperature coefficient, high switching speed, small switching losses, and large safe operating area, make them suitable and attractive for high-power high-frequency applications.",
"title": ""
}
] |
scidocsrr
|
10a2ef7db2c68903bc4fbd07b4a600de
|
Online Affect Detection and Robot Behavior Adaptation for Intervention of Children With Autism
|
[
{
"docid": "0e8e72e35393fca6f334ae2909a4cc74",
"text": "High-functioning children with autism were compared with two control groups on measures of anxiety and social worries. Comparison control groups consisted of children with specific language impairment (SLI) and normally developing children. Each group consisted of 15 children between the ages of 8 and 12 years and were matched for age and gender. Children with autism were found to be most anxious on both measures. High anxiety subscale scores for the autism group were separation anxiety and obsessive-compulsive disorder. These findings are discussed within the context of theories of autism and anxiety in the general population of children. Suggestions for future research are made.",
"title": ""
},
{
"docid": "f1ef345686548b060b70ebc972d51b47",
"text": "Given the importance of implicit communication in human interactions, it would be valuable to have this capability in robotic systems wherein a robot can detect the motivations and emotions of the person it is working with. Recognizing affective states from physiological cues is an effective way of implementing implicit human–robot interaction. Several machine learning techniques have been successfully employed in affect-recognition to predict the affective state of an individual given a set of physiological features. However, a systematic comparison of the strengths and weaknesses of these methods has not yet been done. In this paper, we present a comparative study of four machine learning methods—K-Nearest Neighbor, Regression Tree (RT), Bayesian Network and Support Vector Machine (SVM) as applied to the domain of affect recognition using physiological signals. The results showed that SVM gave the best classification accuracy even though all the methods performed competitively. RT gave the next best classification accuracy and was the most space and time efficient.",
"title": ""
}
] |
[
{
"docid": "44b14f681f175027b22150c115d64c44",
"text": "Video segmentation has become an important and active research area with a large diversity of proposed approaches. Graph-based methods, enabling top-performance on recent benchmarks, consist of three essential components: 1. powerful features account for object appearance and motion similarities; 2. spatio-temporal neighborhoods of pixels or superpixels (the graph edges) are modeled using a combination of those features; 3. video segmentation is formulated as a graph partitioning problem. While a wide variety of features have been explored and various graph partition algorithms have been proposed, there is surprisingly little research on how to construct a graph to obtain the best video segmentation performance. This is the focus of our paper. We propose to combine features by means of a classifier, use calibrated classifier outputs as edge weights and define the graph topology by edge selection. By learning the graph (without changes to the graph partitioning method), we improve the results of the best performing video segmentation algorithm by 6% on the challenging VSB100 benchmark, while reducing its runtime by 55%, as the learnt graph is much sparser.",
"title": ""
},
{
"docid": "96423c77c714172e04d375b7ee1e9869",
"text": "This paper presents a body-fixed-sensor-based approach to assess potential sleep apnea patients. A trial involving 15 patients at a sleep unit was undertaken. Vibration sounds were acquired from an accelerometer sensor fixed with a noninvasive mounting on the suprasternal notch of subjects resting in supine position. Respiratory, cardiac, and snoring components were extracted by means of digital signal processing techniques. Mainly, the following biomedical parameters used in new sleep apnea diagnosis strategies were calculated: heart rate, heart rate variability, sympathetic and parasympathetic activity, respiratory rate, snoring rate, pitch associated with snores, and airflow indirect quantification. These parameters were compared to those obtained by means of polysomnography and an accurate microphone. Results demonstrated the feasibility of implementing an accelerometry-based portable device as a simple and cost-effective solution for contributing to the screening of sleep apnea-hypopnea syndrome and other breathing disorders.",
"title": ""
},
{
"docid": "2827e0d197b7f66c7f6ceb846c6aaa27",
"text": "The food industry is becoming more customer-oriented and needs faster response times to deal with food scandals and incidents. Good traceability systems help to minimize the production and distribution of unsafe or poor quality products, thereby minimizing the potential for bad publicity, liability, and recalls. The current food labelling system cannot guarantee that the food is authentic, good quality and safe. Therefore, traceability is applied as a tool to assist in the assurance of food safety and quality as well as to achieve consumer confidence. This paper presents comprehensive information about traceability with regards to safety and quality in the food supply chain. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "662ec285031306816814378e6e192782",
"text": "One task of heterogeneous face recognition is to match a near infrared (NIR) face image to a visible light (VIS) image. In practice, there are often a few pairwise NIR-VIS face images but it is easy to collect lots of VIS face images. Therefore, how to use these unpaired VIS images to improve the NIR-VIS recognition accuracy is an ongoing issue. This paper presents a deep TransfeR NIR-VIS heterogeneous facE recognition neTwork (TRIVET) for NIR-VIS face recognition. First, to utilize large numbers of unpaired VIS face images, we employ the deep convolutional neural network (CNN) with ordinal measures to learn discriminative models. The ordinal activation function (Max-Feature-Map) is used to select discriminative features and make the models robust and lighten. Second, we transfer these models to NIR-VIS domain by fine-tuning with two types of NIR-VIS triplet loss. The triplet loss not only reduces intra-class NIR-VIS variations but also augments the number of positive training sample pairs. It makes fine-tuning deep models on a small dataset possible. The proposed method achieves state-of-the-art recognition performance on the most challenging CASIA NIR-VIS 2.0 Face Database. It achieves a new record on rank-1 accuracy of 95.74% and verification rate of 91.03% at FAR=0.001. It cuts the error rate in comparison with the best accuracy [27] by 69%.",
"title": ""
},
{
"docid": "74290ff01b32423087ce0025625dc445",
"text": "niques is now the world champion computer program in the game of Contract Bridge. As reported in The New York Times and The Washington Post, this program—a new version of Great Game Products’ BRIDGE BARON program—won the Baron Barclay World Bridge Computer Challenge, an international competition hosted in July 1997 by the American Contract Bridge League. It is well known that the game tree search techniques used in computer programs for games such as Chess and Checkers work differently from how humans think about such games. In contrast, our new version of the BRIDGE BARON emulates the way in which a human might plan declarer play in Bridge by using an adaptation of hierarchical task network planning. This article gives an overview of the planning techniques that we have incorporated into the BRIDGE BARON and discusses what the program’s victory signifies for research on AI planning and game playing.",
"title": ""
},
{
"docid": "e7c97ff0a949f70b79fb7d6dea057126",
"text": "Most conventional document categorization methods require a large number of documents with labeled categories for training. These methods are hard to be applied in scenarios, such as scientific publications, where training data is expensive to obtain and categories could change over years and across domains. In this work, we propose UNEC, an unsupervised representation learning model that directly categories documents without the need of labeled training data. Specifically, we develop a novel cascade embedding approach. We first embed concepts, i.e., significant phrases mined from scientific publications, into continuous vectors, which capture concept semantics. Based on the concept similarity graph built from the concept embedding, we further embed concepts into a hidden category space, where the category information of concepts becomes explicit. Finally we categorize documents by jointly considering the category attribution of their concepts. Our experimental results show that UNEC significantly outperforms several strong baselines on a number of real scientific corpora, under both automatic and manual evaluation.",
"title": ""
},
{
"docid": "51165fba0bc57e99069caca5796398c7",
"text": "Reinforcement learning has achieved several successes in sequential decision problems. However, these methods require a large number of training iterations in complex environments. A standard paradigm to tackle this challenge is to extend reinforcement learning to handle function approximation with deep learning. Lack of interpretability and impossibility to introduce background knowledge limits their usability in many safety-critical real-world scenarios. In this paper, we study how to combine reinforcement learning and external knowledge. We derive a rule-based variant version of the Sarsa(λ) algorithm, which we call Sarsarb(λ), that augments data with complex knowledge and exploits similarities among states. We apply our method to a trading task from the Stock Market Environment. We show that the resulting algorithm leads to much better performance but also improves training speed compared to the Deep Qlearning (DQN) algorithm and the Deep Deterministic Policy Gradients (DDPG) algorithm.",
"title": ""
},
{
"docid": "17ba29c670e744d6e4f9e93ceb109410",
"text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-though, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.",
"title": ""
},
{
"docid": "26cc29177040461634929eb1fa13395d",
"text": "In this paper, we first characterize distributed real-time systems by the following two properties that have to be supported: best eflorl and leas2 suffering. Then, we propose a distributed real-time object model DRO which complies these properties. Based on the DRO model, we design an object oriented programming language DROL: an extension of C++ with the capa.bility of describing distributed real-time systems. The most eminent feature of DROL is that users can describe on meta level the semantics of message communications as a communication protocol with sending and receiving primitives. With this feature, we can construct a flexible distributed real-time system satisfying specifications which include timing constraints. We implement a runtime system of DROL on the ARTS kernel, and evaluate the efficiency of the prototype implementation as well as confirm the high expressive power of the language.",
"title": ""
},
{
"docid": "13177a7395eed80a77571bd02a962bc9",
"text": "Orexin-A and orexin-B are neuropeptides originally identified as endogenous ligands for two orphan G-protein-coupled receptors. Orexin neuropeptides (also known as hypocretins) are produced by a small group of neurons in the lateral hypothalamic and perifornical areas, a region classically implicated in the control of mammalian feeding behavior. Orexin neurons project throughout the central nervous system (CNS) to nuclei known to be important in the control of feeding, sleep-wakefulness, neuroendocrine homeostasis, and autonomic regulation. orexin mRNA expression is upregulated by fasting and insulin-induced hypoglycemia. C-fos expression in orexin neurons, an indicator of neuronal activation, is positively correlated with wakefulness and negatively correlated with rapid eye movement (REM) and non-REM sleep states. Intracerebroventricular administration of orexins has been shown to significantly increase food consumption, wakefulness, and locomotor activity in rodent models. Conversely, an orexin receptor antagonist inhibits food consumption. Targeted disruption of the orexin gene in mice produces a syndrome remarkably similar to human and canine narcolepsy, a sleep disorder characterized by excessive daytime sleepiness, cataplexy, and other pathological manifestations of the intrusion of REM sleep-related features into wakefulness. Furthermore, orexin knockout mice are hypophagic compared with weight and age-matched littermates, suggesting a role in modulating energy metabolism. These findings suggest that the orexin neuropeptide system plays a significant role in feeding and sleep-wakefulness regulation, possibly by coordinating the complex behavioral and physiologic responses of these complementary homeostatic functions.",
"title": ""
},
{
"docid": "0cd42818f21ada2a8a6c2ed7a0f078fe",
"text": "In perceiving objects we may synthesize conjunctions of separable features by directing attention serially to each item in turn (A. Treisman and G. Gelade, Cognitive Psychology, 1980, 12, 97136). This feature-integration theory predicts that when attention is diverted or overloaded, features may be wrongly recombined, giving rise to “illusory conjunctions.” The present paper confirms that illusory conjunctions are frequently experienced among unattended stimuli varying in color and shape, and that they occur also with size and solidity (outlined versus filled-in shapes). They are shown both in verbal recall and in simultaneous and successive matching tasks, making it unlikely that they depend on verbal labeling or on memory failure. They occur as often between stimuli differing on many features as between more similar stimuli, and spatial separation has little effect on their frequency. Each feature seems to be coded as an independent entity and to migrate, when attention is diverted, with few constraints from the other features of its source or destination.",
"title": ""
},
{
"docid": "853ef57bfa4af5edf4ee3c8a46e4b4f4",
"text": "Hidden properties of social media users, such as their ethnicity, gender, and location, are often reflected in their observed attributes, such as their first and last names. Furthermore, users who communicate with each other often have similar hidden properties. We propose an algorithm that exploits these insights to cluster the observed attributes of hundreds of millions of Twitter users. Attributes such as user names are grouped together if users with those names communicate with other similar users. We separately cluster millions of unique first names, last names, and userprovided locations. The efficacy of these clusters is then evaluated on a diverse set of classification tasks that predict hidden users properties such as ethnicity, geographic location, gender, language, and race, using only profile names and locations when appropriate. Our readily-replicable approach and publiclyreleased clusters are shown to be remarkably effective and versatile, substantially outperforming state-of-the-art approaches and human accuracy on each of the tasks studied.",
"title": ""
},
{
"docid": "77f60100af0c9556e5345ee1b04d8171",
"text": "SDNET2018 is an annotated image dataset for training, validation, and benchmarking of artificial intelligence based crack detection algorithms for concrete. SDNET2018 contains over 56,000 images of cracked and non-cracked concrete bridge decks, walls, and pavements. The dataset includes cracks as narrow as 0.06 mm and as wide as 25 mm. The dataset also includes images with a variety of obstructions, including shadows, surface roughness, scaling, edges, holes, and background debris. SDNET2018 will be useful for the continued development of concrete crack detection algorithms based on deep convolutional neural networks (DCNNs), which are a subject of continued research in the field of structural health monitoring. The authors present benchmark results for crack detection using SDNET2018 and a crack detection algorithm based on the AlexNet DCNN architecture. SDNET2018 is freely available at https://doi.org/10.15142/T3TD19.",
"title": ""
},
{
"docid": "e8f431676ed0a85cb09a6462303a3ec7",
"text": "This paper describes Champollion, a lexicon-based sentence aligner designed for robust alignment of potential noisy parallel text. Champollion increases the robustness of the alignment by assigning greater weights to less frequent translated words. Experiments on a manually aligned Chinese – English parallel corpus show that Champollion achieves high precision and recall on noisy data. Champollion can be easily ported to new language pairs. It’s freely available to the public.",
"title": ""
},
{
"docid": "e3b473dbff892af0175a73275c770f7d",
"text": "Spacecraft require all manner of both digital and analog circuits. Onboard digital systems are constructed almost exclusively from field-programmable gate array (FPGA) circuits providing numerous advantages over discrete design including high integration density, high reliability, fast turn-around design cycle time, lower mass, volume, and power consumption, and lower parts acquisition and flight qualification costs. Analog and mixed-signal circuits perform tasks ranging from housekeeping to signal conditioning and processing. These circuits are painstakingly designed and built using discrete components due to a lack of options for field-programmability. FPAA (Field-Programmable Analog Array) and FPMA (Field-Programmable Mixed-signal Array) parts exist [1] but not in radiation-tolerant technology and not necessarily in an architecture optimal for the design of analog circuits for spaceflight applications. This paper outlines an architecture proposed for an FPAA fabricated in an existing commercial digital CMOS process used to make radiation-tolerant antifuse-based FPGA devices. The primary concerns are the impact of the technology and the overall array architecture on the flexibility of programming, the bandwidth available for high-speed analog circuits, and the accuracy of the components for highperformance applications.",
"title": ""
},
{
"docid": "774df4733d98b781f32222cf843ec381",
"text": "This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function f in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a nonlinear transformation between the joint feature/label space distributions of the two domain Ps and Pt that can be estimated with optimal transport. We propose a solution of this problem that allows to recover an estimated target P t = (X, f(X)) by optimizing simultaneously the optimal coupling and f . We show that our method corresponds to the minimization of a bound on the target error, and provide an efficient algorithmic solution, for which convergence is proved. The versatility of our approach, both in terms of class of hypothesis or loss functions is demonstrated with real world classification and regression problems, for which we reach or surpass state-of-the-art results.",
"title": ""
},
{
"docid": "b7a08eaeb69fa6206cb9aec9cc54f2c3",
"text": "This paper describes a computational pragmatic model which is geared towards providing helpful answers to modal and hypothetical questions. The work brings together elements from fonna l . semantic theories on modality m~d question answering, defines a wkler, pragmatically flavoured, notion of answerhood based on non-monotonic inference aod develops a notion of context, within which aspects of more cognitively oriented theories, such as Relevance Theory, can be accommodated. The model has been inlplemented. The research was fundexl by ESRC grant number R000231279.",
"title": ""
},
{
"docid": "ca905aef2477905783f7d18be841f99b",
"text": "PURPOSE\nHumans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit.\n\n\nMETHODS\nIn experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field.\n\n\nRESULTS\nPursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. CONCLUSIONS. Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.",
"title": ""
},
{
"docid": "3907bddf6a56b96c4e474d46ddd04359",
"text": "The aim of this review is to discuss the accumulating evidence that suggests that grape extracts and purified grape polyphenols possess a diverse array of biological actions and may be beneficial in the prevention of some inflammatory-mediated diseases including cardiovascular disease. The active components from grape extracts, which include the grape seed, grape skin, and grape juice, that have been identified thus far include polyphenols such as resveratrol, phenolic acids, anthocyanins, and flavonoids. All possess potent antioxidant properties and have been shown to decrease low-density lipoprotein-cholesterol oxidation and platelet aggregation. These compounds also possess a range of additional cardioprotective and vasoprotective properties including antiatherosclerotic, antiarrhythmic, and vasorelaxation actions. Although not exclusive, antioxidant properties of grape polyphenols are likely to be central to their mechanism(s) of action, which also include cellular signaling mechanisms and interactions at the genomic level. This review discusses some of the evidence favoring the consumption of grape extracts rich in polyphenols in the prevention of cardiovascular disease. Consumption of grape and grape extracts and/or grape products such as red wine may be beneficial in preventing the development of chronic degenerative diseases such as cardiovascular disease.",
"title": ""
},
{
"docid": "024cc15c164656f90ade55bf3c391405",
"text": "Unmanned aerial vehicles (UAVs), also known as drones have many applications and they are a current trend across many industries. They can be used for delivery, sports, surveillance, professional photography, cinematography, military combat, natural disaster assistance, security, and the list grows every day. Programming opens an avenue to automate many processes of daily life and with the drone as aerial programmable eyes, security and surveillance can become more efficient and cost effective. At Barry University, parking is becoming an issue as the number of people visiting the school greatly outnumbers the convenient parking locations. This has caused a multitude of hazards in parking lots due to people illegally parking, as well as unregistered vehicles parking in reserved areas. In this paper, we explain how automated drone surveillance is utilized to detect unauthorized parking at Barry University. The automated process is incorporated into Java application and completed in three steps: collecting visual data, processing data automatically, and sending automated responses and queues to the operator of the system.",
"title": ""
}
] |
scidocsrr
|
80df194bf7f0aedd9a14fb55de2b3856
|
The Body and the Beautiful: Health, Attractiveness and Body Composition in Men’s and Women’s Bodies
|
[
{
"docid": "6210a0a93b97a12c2062ac78953f3bd1",
"text": "This article proposes a contextual-evolutionary theory of human mating strategies. Both men and women are hypothesized to have evolved distinct psychological mechanisms that underlie short-term and long-term strategies. Men and women confront different adaptive problems in short-term as opposed to long-term mating contexts. Consequently, different mate preferences become activated from their strategic repertoires. Nine key hypotheses and 22 predictions from Sexual Strategies Theory are outlined and tested empirically. Adaptive problems sensitive to context include sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment. Discussion summarizes 6 additional sources of behavioral data, outlines adaptive problems common to both sexes, and suggests additional contexts likely to cause shifts in mating strategy.",
"title": ""
}
] |
[
{
"docid": "dabbcd5d79b011b7d091ef3a471d9779",
"text": "This paper borrows ideas from social science to inform the design of novel \"sensing\" user-interfaces for computing technology. Specifically, we present five design challenges inspired by analysis of human-human communication that are mundanely addressed by traditional graphical user interface designs (GUIs). Although classic GUI conventions allow us to finesse these questions, recent research into innovative interaction techniques such as 'Ubiquitous Computing' and 'Tangible Interfaces' has begun to expose the interaction challenges and problems they pose. By making them explicit we open a discourse on how an approach similar to that used by social scientists in studying human-human interaction might inform the design of novel interaction mechanisms that can be used to handle human-computer communication accomplishments",
"title": ""
},
{
"docid": "9d2ec490b7efb23909abdbf5f209f508",
"text": "Terrestrial Laser scanner (TLS) has been widely used in our recent architectural heritage projects and huge quantity of point cloud data was gotten. In order to process the huge quantity of point cloud data effectively and reconstruct their 3D models, more effective methods should be developed based on existing automatic or semiautomatic point cloud processing algorithms. Here introduce a new algorithm for rapid extracting the pillar features of Chinese ancient buildings from their point cloud data, the algorithm has the least human interaction in the data processing and is more efficient to extract pillars from point cloud data than existing feature extracting algorithms. With this algorithm we identify the pillar features by dividing the point cloud into slices firstly, and then get the projective parameters of pillar objects in selected slices, the next compare the local projective parameters in adjacent slices, the next combine them to get the global parameters of the pillars and at last reconstruct the 3d pillar models.",
"title": ""
},
{
"docid": "bd3717bd46869b9be3153478cbd19f2a",
"text": "The study was conducted to assess the effectiveness of jasmine oil massage on labour pain during first stage of labour among 40 primigravida women. The study design adopted was true experimental approach with pre-test post-test control group design. The demographic Proforma were collected from the women by interview and Visual analogue scale was used to measure the level of labour pain in both the groups. Data obtained in these areas were analysed by descriptive and inferential statistics. A significant difference was found in the experimental group( t 9.869 , p<0.05) . A significant difference was found between experimental group and control group. cal",
"title": ""
},
{
"docid": "4bd123c2c44e703133e9a6093170db39",
"text": "This paper presents a single-phase cascaded H-bridge converter for a grid-connected photovoltaic (PV) application. The multilevel topology consists of several H-bridge cells connected in series, each one connected to a string of PV modules. The adopted control scheme permits the independent control of each dc-link voltage, enabling, in this way, the tracking of the maximum power point for each string of PV panels. Additionally, low-ripple sinusoidal-current waveforms are generated with almost unity power factor. The topology offers other advantages such as the operation at lower switching frequency or lower current ripple compared to standard two-level topologies. Simulation and experimental results are presented for different operating conditions.",
"title": ""
},
{
"docid": "e637dc1aee0632f61a29c8609187a98b",
"text": "Scene coordinate regression has become an essential part of current camera re-localization methods. Different versions, such as regression forests and deep learning methods, have been successfully applied to estimate the corresponding camera pose given a single input image. In this work, we propose to regress the scene coordinates pixel-wise for a given RGB image by using deep learning. Compared to the recent methods, which usually employ RANSAC to obtain a robust pose estimate from the established point correspondences, we propose to regress confidences of these correspondences, which allows us to immediately discard erroneous predictions and improve the initial pose estimates. Finally, the resulting confidences can be used to score initial pose hypothesis and aid in pose refinement, offering a generalized solution to solve this task.",
"title": ""
},
{
"docid": "7ce9ef05d3f4a92f6b187d7986b70be1",
"text": "With the growth in the consumer electronics industry, it is vital to develop an algorithm for ultrahigh definition products that is more effective and has lower time complexity. Image interpolation, which is based on an autoregressive model, has achieved significant improvements compared with the traditional algorithm with respect to image reconstruction, including a better peak signal-to-noise ratio (PSNR) and improved subjective visual quality of the reconstructed image. However, the time-consuming computation involved has become a bottleneck in those autoregressive algorithms. Because of the high time cost, image autoregressive-based interpolation algorithms are rarely used in industry for actual production. In this study, in order to meet the requirements of real-time reconstruction, we use diverse compute unified device architecture (CUDA) optimization strategies to make full use of the graphics processing unit (GPU) (NVIDIA Tesla K80), including a shared memory and register and multi-GPU optimization. To be more suitable for the GPU-parallel optimization, we modify the training window to obtain a more concise matrix operation. Experimental results show that, while maintaining a high PSNR and subjective visual quality and taking into account the I/O transfer time, our algorithm achieves a high speedup of 147.3 times for a Lena image and 174.8 times for a 720p video, compared to the original single-threaded C CPU code with -O2 compiling optimization.",
"title": ""
},
{
"docid": "a8d6a864092b3deb58be27f0f76b02c2",
"text": "High-quality word representations have been very successful in recent years at improving performance across a variety of NLP tasks. These word representations are the mappings of each word in the vocabulary to a real vector in the Euclidean space. Besides high performance on specific tasks, learned word representations have been shown to perform well on establishing linear relationships among words. The recently introduced skipgram model improved performance on unsupervised learning of word embeddings that contains rich syntactic and semantic word relations both in terms of accuracy and speed. Word embeddings that have been used frequently on English language, is not applied to Turkish yet. In this paper, we apply the skip-gram model to a large Turkish text corpus and measured the performance of them quantitatively with the \"question\" sets that we generated. The learned word embeddings and the question sets are publicly available at our website. Keywords—Word embeddings, Natural Language Processing, Deep Learning",
"title": ""
},
{
"docid": "67a3f92ab8c5a6379a30158bb9905276",
"text": "We present a compendium of recent and current projects that utilize crowdsourcing technologies for language studies, finding that the quality is comparable to controlled laboratory experiments, and in some cases superior. While crowdsourcing has primarily been used for annotation in recent language studies, the results here demonstrate that far richer data may be generated in a range of linguistic disciplines from semantics to psycholinguistics. For these, we report a number of successful methods for evaluating data quality in the absence of a ‘correct’ response for any given data point.",
"title": ""
},
{
"docid": "41d32df9d58f9c38f75010c87c0c3327",
"text": "Evidence from many countries in recent years suggests that collateral values and recovery rates on corporate defaults can be volatile and, moreover, that they tend to go down just when the number of defaults goes up in economic downturns. This link between recovery rates and default rates has traditionally been neglected by credit risk models, as most of them focused on default risk and adopted static loss assumptions, treating the recovery rate either as a constant parameter or as a stochastic variable independent from the probability of default. This traditional focus on default analysis has been partly reversed by the recent significant increase in the number of studies dedicated to the subject of recovery rate estimation and the relationship between default and recovery rates. This paper presents a detailed review of the way credit risk models, developed during the last thirty years, treat the recovery rate and, more specifically, its relationship with the probability of default of an obligor. We also review the efforts by rating agencies to formally incorporate recovery ratings into their assessment of corporate loan and bond credit risk and the recent efforts by the Basel Committee on Banking Supervision to consider “downturn LGD” in their suggested requirements under Basel II. Recent empirical evidence concerning these issues and the latest data on high-yield bond and leverage loan defaults is also presented and discussed.",
"title": ""
},
{
"docid": "db36273a3669e1aeda1bf2c5ab751387",
"text": "Autonomous Ground Vehicles designed for dynamic environments require a reliable perception of the real world, in terms of obstacle presence, position and speed. In this paper we present a flexible technique to build, in real time, a dense voxel-based map from a 3D point cloud, able to: (1) discriminate between stationary and moving obstacles; (2) provide an approximation of the detected obstacle's absolute speed using the information of the vehicle's egomotion computed through a visual odometry approach. The point cloud is first sampled into a full 3D map based on voxels to preserve the tridimensional information; egomotion information allows computational efficiency in voxels creation; then voxels are processed using a flood fill approach to segment them into a clusters structure; finally, with the egomotion information, the obtained clusters are labeled as stationary or moving obstacles, and an estimation of their speed is provided. The algorithm runs in real time; it has been tested on one of VisLab's AGVs using a modified SGM-based stereo system as 3D data source.",
"title": ""
},
{
"docid": "01962e512740addbe5f444ed581ebb48",
"text": "We investigate how neural, encoder-decoder translation systems output target strings of appropriate lengths, finding that a collection of hidden units learns to explicitly implement this functionality.",
"title": ""
},
{
"docid": "262c11ab9f78e5b3f43a31ad22cf23c5",
"text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.",
"title": ""
},
{
"docid": "1a0ed30b64fa7f8d39a12acfcadfd763",
"text": "This letter presents a smart shelf configuration for radio frequency identification (RFID) application. The proposed shelf has an embedded leaking microstrip transmission line with extended ground plane. This structure, when connected to an RFID reader, allows detecting tagged objects in close proximity with proper field confinement to avoid undesired reading of neighboring shelves. The working frequency band covers simultaneously the three world assigned RFID subbands at ultrahigh frequency (UHF). The concept is explored by full-wave simulations and it is validated with thorough experimental tests.",
"title": ""
},
{
"docid": "ff8089430cdae3e733b06a7aa1b759b4",
"text": "We derive a model for consumer loan default and credit card expenditure. The default model is based on statistical models for discrete choice, in contrast to the usual procedure of linear discriminant analysis. The model is then extended to incorporate the default probability in a model of expected profit. The technique is applied to a large sample of applications and expenditure from a major credit card company. The nature of the data mandates the use of models of sample selection for estimation. The empirical model for expected profit produces an optimal acceptance rate for card applications which is far higher than the observed rate used by the credit card vendor based on the discriminant analysis. I am grateful to Terry Seaks for valuable comments on an earlier draft of this paper and to Jingbin Cao for his able research assistance. The provider of the data and support for this project has requested anonymity, so I must thank them as such. Their help and support are gratefully acknowledged. Participants in the applied econometrics workshop at New York University also provided useful commentary.",
"title": ""
},
{
"docid": "fb2287cb1c41441049288335f10fd473",
"text": "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly",
"title": ""
},
{
"docid": "92da117d31574246744173b339b0d055",
"text": "We present a method for gesture detection and localization based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at two temporal scales. Key to our technique is a training strategy which exploits i) careful initialization of individual modalities; and ii) gradual fusion of modalities from strongest to weakest cross-modality structure. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams.",
"title": ""
},
{
"docid": "bf294a4c3af59162b2f401e2cdcb060b",
"text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.",
"title": ""
},
{
"docid": "10318d39b3ad18779accbf29b2f00fcd",
"text": "Designing convolutional neural networks (CNN) models for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant effort has been dedicated to design and improve mobile models on all three dimensions, it is challenging to manually balance these trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated neural architecture search approach for designing resourceconstrained mobile CNN models. We propose to explicitly incorporate latency information into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. Unlike in previous work, where mobile latency is considered via another, often inaccurate proxy (e.g., FLOPS), in our experiments, we directly measure real-world inference latency by executing the model on a particular platform, e.g., Pixel phones. To further strike the right balance between flexibility and search space size, we propose a novel factorized hierarchical search space that permits layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classification task, our model achieves 74.0% top-1 accuracy with 76ms latency on a Pixel phone, which is 1.5× faster than MobileNetV2 (Sandler et al. 2018) and 2.4× faster than NASNet (Zoph et al. 2018) with the same top-1 accuracy. On the COCO object detection task, our model family achieves both higher mAP quality and lower latency than MobileNets.",
"title": ""
},
{
"docid": "f6a9670544a784a5fc431746557473a3",
"text": "Massive multiple-input multiple-output (MIMO) systems are cellular networks where the base stations (BSs) are equipped with unconventionally many antennas, deployed on co-located or distributed arrays. Huge spatial degrees-of-freedom are achieved by coherent processing over these massive arrays, which provide strong signal gains, resilience to imperfect channel knowledge, and low interference. This comes at the price of more infrastructure; the hardware cost and circuit power consumption scale linearly/affinely with the number of BS antennas N. Hence, the key to cost-efficient deployment of large arrays is low-cost antenna branches with low circuit power, in contrast to today's conventional expensive and power-hungry BS antenna branches. Such low-cost transceivers are prone to hardware imperfections, but it has been conjectured that the huge degrees-of-freedom would bring robustness to such imperfections. We prove this claim for a generalized uplink system with multiplicative phase-drifts, additive distortion noise, and noise amplification. Specifically, we derive closed-form expressions for the user rates and a scaling law that shows how fast the hardware imperfections can increase with N while maintaining high rates. The connection between this scaling law and the power consumption of different transceiver circuits is rigorously exemplified. This reveals that one can make √N the circuit power increase as N, instead of linearly, by careful circuit-aware system design.",
"title": ""
},
{
"docid": "fa20b9427a8dcfd8db90e0a6eb5e7d8c",
"text": "Recent functional brain imaging studies suggest that object concepts may be represented, in part, by distributed networks of discrete cortical regions that parallel the organization of sensory and motor systems. In addition, different regions of the left lateral prefrontal cortex, and perhaps anterior temporal cortex, may have distinct roles in retrieving, maintaining and selecting semantic information.",
"title": ""
}
] |
scidocsrr
|
305b328755a9b446456c52a00c000c49
|
Adversarial Image Perturbation for Privacy Protection - A Game Theory Perspective
|
[
{
"docid": "9f635d570b827d68e057afcaadca791c",
"text": "Researches have verified that clothing provides information about the identity of the individual. To extract features from the clothing, the clothing region first must be localized or segmented in the image. At the same time, given multiple images of the same person wearing the same clothing, we expect to improve the effectiveness of clothing segmentation. Therefore, the identity recognition and clothing segmentation problems are inter-twined; a good solution for one aides in the solution for the other. We build on this idea by analyzing the mutual information between pixel locations near the face and the identity of the person to learn a global clothing mask. We segment the clothing region in each image using graph cuts based on a clothing model learned from one or multiple images believed to be the same person wearing the same clothing. We use facial features and clothing features to recognize individuals in other images. The results show that clothing segmentation provides a significant improvement in recognition accuracy for large image collections, and useful clothing masks are simultaneously produced. A further significant contribution is that we introduce a publicly available consumer image collection where each individual is identified. We hope this dataset allows the vision community to more easily compare results for tasks related to recognizing people in consumer image collections.",
"title": ""
},
{
"docid": "f550f06ab3d8a13e6ae30454bc2812ac",
"text": "Deep neural networks are powerful and popular learning models that achieve stateof-the-art pattern recognition performance on many computer vision, speech, and language processing tasks. However, these networks have also been shown susceptible to carefully crafted adversarial perturbations which force misclassification of the inputs. Adversarial examples enable adversaries to subvert the expected system behavior leading to undesired consequences and could pose a security risk when these systems are deployed in the real world. In this work, we focus on deep convolutional neural networks and demonstrate that adversaries can easily craft adversarial examples even without any internal knowledge of the target network. Our attacks treat the network as an oracle (blackbox) and only assume that the output of the network can be observed on the probed inputs. Our first attack is based on a simple idea of adding perturbation to a randomly selected single pixel or a small set of them. We then improve the effectiveness of this attack by carefully constructing a small set of pixels to perturb by using the idea of greedy local-search. Our proposed attacks also naturally extend to a stronger notion of misclassification. Our extensive experimental results illustrate that even these elementary attacks can reveal a deep neural network’s vulnerabilities. The simplicity and effectiveness of our proposed schemes mean that they could serve as a litmus test for designing robust networks.",
"title": ""
},
{
"docid": "11a69c06f21e505b3e05384536108325",
"text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
"title": ""
}
] |
[
{
"docid": "883191185d4671164eb4f12f19eb47f3",
"text": "Lustre is a declarative, data-flow language, which is devoted to the specification of synchronous and real-time applications. It ensures efficient code generation and provides formal specification and verification facilities. A graphical tool dedicated to the development of critical embedded systems and often used by industries and professionals is SCADE (Safety Critical Application Development Environment). SCADE is a graphical environment based on the LUSTRE language and it allows the hierarchical definition of the system components and the automatic code generation. This research work is partially concerned with Lutess, a testing environment which automatically transforms formal specifications into test data generators.",
"title": ""
},
{
"docid": "15e440bc952db5b0ad71617e509770b9",
"text": "The task of recommending relevant scientific literature for a draft academic paper has recently received significant interest. In our effort to ease the discovery of scientific literature and augment scientific writing, we aim to improve the relevance of results based on a shallow semantic analysis of the source document and the potential documents to recommend. We investigate the utility of automatic argumentative and rhetorical annotation of documents for this purpose. Specifically, we integrate automatic Core Scientific Concepts (CoreSC) classification into a prototype context-based citation recommendation system and investigate its usefulness to the task. We frame citation recommendation as an information retrieval task and we use the categories of the annotation schemes to apply different weights to the similarity formula. Our results show interesting and consistent correlations between the type of citation and the type of sentence containing the relevant information.",
"title": ""
},
{
"docid": "8182c4d6995d3a385219990f0b1909fa",
"text": "Random forests are becoming increasingly popular in many scientific fields because they can cope with \"small n large p\" problems, complex interactions and even highly correlated predictor variables. Their variable importance measures have recently been suggested as screening tools for, e.g., gene expression studies. However, these variable importance measures show a bias towards correlated predictor variables. We identify two mechanisms responsible for this finding: (i) A preference for the selection of correlated predictors in the tree building process and (ii) an additional advantage for correlated predictor variables induced by the unconditional permutation scheme that is employed in the computation of the variable importance measure. Based on these considerations we develop a new, conditional permutation scheme for the computation of the variable importance measure. The resulting conditional variable importance reflects the true impact of each predictor variable more reliably than the original marginal approach.",
"title": ""
},
{
"docid": "b53ee86e671ea8db6f9f84c8c02c2b5b",
"text": "The accurate estimation of students’ grades in future courses is important as it can inform the selection of next term’s courses and create personalized degree pathways to facilitate successful and timely graduation. This paper presents future course grade predictions methods based on sparse linear and low-rank matrix factorization models that are specific to each course or student–course tuple. These methods identify the predictive subsets of prior courses on a course-by-course basis and better address problems associated with the not-missing-at-random nature of the student–course historical grade data. The methods were evaluated on a dataset obtained from the University of Minnesota, for two different departments with different characteristics. This evaluation showed that focusing on course-specific data improves the accuracy of grade prediction.",
"title": ""
},
{
"docid": "1efd6da40ac525921b63257d9a3990be",
"text": "Movie plot summaries are expected to reflect the genre of movies since many spectators read the plot summaries before deciding to watch a movie. In this study, we perform movie genre classification from plot summaries of movies using bidirectional LSTM (Bi-LSTM). We first divide each plot summary of a movie into sentences and assign the genre of corresponding movie to each sentence. Next, using the word representations of sentences, we train Bi-LSTM networks. We estimate the genres for each sentence separately. Since plot summaries generally contain multiple sentences, we use majority voting for the final decision by considering the posterior probabilities of genres assigned to sentences. Our results reflect that, training Bi-LSTM network after dividing the plot summaries into their sentences and fusing the predictions for individual sentences outperform training the network with the whole plot summaries with the limited amount of data. Moreover, employing Bi-LSTM performs better compared to basic Recurrent Neural Networks (RNNs) and Logistic Regression (LR) as a baseline.",
"title": ""
},
{
"docid": "3f9bcd99eac46264ee0920ddcc866d33",
"text": "The advent of easy to use blogging tools is increasing the number of bloggers leading to more diversity in the quality blogspace. The blog search technologies that help users to find “good” blogs are thus more and more important. This paper proposes a new algorithm called “EigenRumor” that scores each blog entry by weighting the hub and authority scores of the bloggers based on eigenvector calculations. This algorithm enables a higher score to be assigned to the blog entries submitted by a good blogger but not yet linked to by any other blogs based on acceptance of the blogger's prior work. General Terms Algorithms, Management, Experimentation",
"title": ""
},
{
"docid": "e897ab9c0f9f850582fbcb172aa8b904",
"text": "Facial expression recognition is in general a challenging problem, especially in the presence of weak expression. Most recently, deep neural networks have been emerging as a powerful tool for expression recognition. However, due to the lack of training samples, existing deep network-based methods cannot fully capture the critical and subtle details of weak expression, resulting in unsatisfactory results. In this paper, we propose Deeper Cascaded Peak-piloted Network (DCPN) for weak expression recognition. The technique of DCPN has three main aspects: (1) Peak-piloted feature transformation, which utilizes the peak expression (easy samples) to supervise the non-peak expression (hard samples) of the same type and subject; (2) the back-propagation algorithm is specially designed such that the intermediate-layer feature maps of non-peak expression are close to those of the corresponding peak expression; and (3) an novel integration training method, cascaded fine-tune, is proposed to prevent the network from overfitting. Experimental results on two popular facial expression databases, CK$$+$$ + and Oulu-CASIA, show the superiority of the proposed DCPN over state-of-the-art methods.",
"title": ""
},
{
"docid": "21f6ca062098c0dcf04fe8fadfc67285",
"text": "The Key study in this paper is to begin the investigation process with the initial forensic analysis in the segments of the storage media which would definitely contain the digital forensic evidences. These Storage media Locations is referred as the Windows registry. Identifying the forensic evidence from windows registry may take less time than required in the case of all locations of a storage media. Our main focus in this research will be to study the registry structure of Windows 7 and identify the useful information within the registry keys of windows 7 that may be extremely useful to carry out any task of digital forensic analysis. The main aim is to describe the importance of the study on computer & digital forensics. The Idea behind the research is to implement a forensic tool which will be very useful in extracting the digital evidences and present them in usable form to a forensic investigator. The work includes identifying various events registry keys value such as machine last shut down time along with machine name, List of all the wireless networks that the computer has connected to; List of the most recently used files or applications, List of all the USB devices that have been attached to the computer and many more. This work aims to point out the importance of windows forensic analysis to extract and identify the hidden information which shall act as an evidence tool to track and gather the user activities pattern. All Research was conducted in a Windows 7 Environment. Keywords—Windows Registry, Windows 7 Forensic Analysis, Windows Registry Structure, Analysing Registry Key, Digital Forensic Identification, Forensic data Collection, Examination of Windows Registry, Decoding of Windows Registry Keys, Discovering User Activities Patterns, Computer Forensic Investigation Tool.",
"title": ""
},
{
"docid": "15b2279d218f0df5d496479644620846",
"text": "Despite the proliferation of banking services, lending to industry and the public still constitutes the core of the income of commercial banks and other lending institutions in developed as well as post-transition countries. From the technical perspective, the lending process in general is a relatively straightforward series of actions involving two principal parties. These activities range from the initial loan application to the successful or unsuccessful repayment of the loan. Although retail lending belongs among the most profitable investments in lenders’ asset portfolios (at least in developed countries), increases in the amounts of loans also bring increases in the number of defaulted loans, i.e. loans that either are not repaid at all or cases in which the borrower has problems with paying debts. Thus, the primary problem of any lender is to differentiate between “good” and “bad” debtors prior to granting credit. Such differentiation is possible by using a credit-scoring method. The goal of this paper is to review credit-scoring methods and elaborate on their efficiency based on the examples from the applied research. Emphasis is placed on credit scoring related to retail loans. We survey the methods which are suitable for credit scoring in the retail segment. We focus on retail loans as sharp increase in the amounts of loans for this clientele has been recorded in the last few years and another increase can be expected. This dynamic is highly relevant for post-transition countries. In the last few years, banks in the Czech and Slovak Republics have allocated a significant part of their lending to retail clientele. In 2004 alone, Czech and Slovak banks recorded 33.8% and 36.7% increases in retail loans, respectively. Hilbers et al. (2005) review trends in bank lending to the private sector, with a particular focus on Central and Eastern European countries, and find that rapid growth of private sector credit continues to be a key challenge for most of these countries. In the Czech and Slovak Republics the financial liabilities of households formed 11 % and 9 %",
"title": ""
},
{
"docid": "c4ccb674a07ba15417f09b81c1255ba8",
"text": "Real world environments are characterized by high levels of linguistic and numerical uncertainties. A Fuzzy Logic System (FLS) is recognized as an adequate methodology to handle the uncertainties and imprecision available in real world environments and applications. Since the invention of fuzzy logic, it has been applied with great success to numerous real world applications such as washing machines, food processors, battery chargers, electrical vehicles, and several other domestic and industrial appliances. The first generation of FLSs were type-1 FLSs in which type-1 fuzzy sets were employed. Later, it was found that using type-2 FLSs can enable the handling of higher levels of uncertainties. Recent works have shown that interval type-2 FLSs can outperform type-1 FLSs in the applications which encompass high uncertainty levels. However, the majority of interval type-2 FLSs handle the linguistic and input numerical uncertainties using singleton interval type-2 FLSs that mix the numerical and linguistic uncertainties to be handled only by the linguistic labels type-2 fuzzy sets. This ignores the fact that if input numerical uncertainties were present, they should affect the incoming inputs to the FLS. Even in the papers that employed non-singleton type-2 FLSs, the input signals were assumed to have a predefined shape (mostly Gaussian or triangular) which might not reflect the real uncertainty distribution which can vary with the associated measurement. In this paper, we will present a new approach which is based on an adaptive non-singleton interval type-2 FLS where the numerical uncertainties will be modeled and handled by non-singleton type-2 fuzzy inputs and the linguistic uncertainties will be handled by interval type-2 fuzzy sets to represent the antecedents’ linguistic labels. The non-singleton type-2 fuzzy inputs are dynamic and they are automatically generated from data and they do not assume a specific shape about the distribution associated with the given sensor. We will present several real world experiments using a real world robot which will show how the proposed type-2 non-singleton type-2 FLS will produce a superior performance to its singleton type-1 and type-2 counterparts when encountering high levels of uncertainties.",
"title": ""
},
{
"docid": "99ec846ba77110a1af12845cafdf115c",
"text": "Planning information security investment is somewhere between art and science. This paper reviews and compares existing scientific approaches and discusses the relation between security investment models and security metrics. To structure the exposition, the high-level security production function is decomposed into two steps: cost of security is mapped to a security level, which is then mapped to benefits. This allows to structure data sources and metrics, to rethink the notion of security productivity, and to distinguish sources of indeterminacy as measurement error and attacker behavior. It is further argued that recently proposed investment models, which try to capture more features specific to information security, should be used for all strategic security investment decisions beneath defining the overall security budget.",
"title": ""
},
{
"docid": "6acd1583b23a65589992c3297250a603",
"text": "Trichostasis spinulosa (TS) is a common but rarely diagnosed disease. For diagnosis, it's sufficient to see a bundle of vellus hair located in a keratinous sheath microscopically. In order to obtain these vellus hair settled in comedone-like openings, Standard skin surface biopsy (SSSB), a non-invasive method was chosen. It's aimed to remind the differential diagnosis of TS in treatment-resistant open comedone-like lesions and discuss the SSSB method in diagnosis. A 25-year-old female patient was admitted with a complaint of the black spots located on bilateral cheeks and nose for 12 years. In SSSB, multiple vellus hair bundles in funnel-shaped structures were observed under the microscope, and a diagnosis of 'TS' was made. After six weeks of treatment with tretinoin 0.025% and 4% erythromycin jel topically, the appearance of black macules was significantly reduced. Treatment had to be terminated due to her pregnancy, and the lesions recurred within 1 month. It's believed that TS should be considered in the differential diagnosis of treatment-resistant open comedone-like lesions, and SSSB might be an inexpensive and effective alternative method for the diagnosis of TS.",
"title": ""
},
{
"docid": "3afea784f4a9eb635d444a503266d7cd",
"text": "Gallium nitride high-electron mobility transistors (GaN HEMTs) have attractive properties, low on-resistances and fast switching speeds. This paper presents the characteristics of a normally-on GaN HEMT that we fabricated. Further, the circuit operation of a Class-E amplifier is analyzed. Experimental results demonstrate the excellent performance of the gate drive circuit for the normally-on GaN HEMT and the 13.56MHz radio frequency (RF) power amplifier.",
"title": ""
},
{
"docid": "47a484d75b1635139f899d2e1875d8f4",
"text": "This work presents the concept and methodology as well as the architecture and physical implementation of an integrated node for smart-city applications. The presented integrated node lies on active RFID technology whereas the use case illustrated, with results from a small-scale verification of the presented node, refers to common-type waste-bins. The sensing units deployed for the use case are ultrasonic sensors that provide ranging information which is translated to fill-level estimations; however the use of a versatile active RFID tag within the node is able to afford multiple sensors for a variety of smart-city applications. The most important benefits of the presented node are power minimization, utilization of low-cost components and accurate fill-level estimation with a tiny data-load fingerprint, regarding the specific use case on waste-bins, whereas the node has to be deployed on public means of transportation or similar standard route vehicles within an urban or suburban context.",
"title": ""
},
{
"docid": "352dbf516ba3cde1f1398cb5d75a76c1",
"text": "We are building a `virtual-world' of a real world seabed for its visual analysis. Sub-bottom profile is imported in the 3D environment. “section-drilling” three-dimensional model is designed according to the characteristics of the multi-source comprehensive data under the seabed. In this model, the seabed stratigraphic profile obtained by seismic reflection is digitized into discrete points and interpolated with an approved Kriging arithmetic to produce uniform grid in every strata layer. The Delaunay triangular model is then constructed in every layer and calibrated using the drilling data to rectify the depth value of the dataset within the buffer. Finally, the constructed 3D seabed stratigraphic model is rendered in every layer by GPU shader engine. Based on this model, two state-of-the-art applications on website explorer and smartphone prove its ubiquitous feature. The resulting `3D Seabed' is used for simulation, visualization, and analysis, by a set of interlinked, real-time layers of information about the 3D Seabed and its analysis result.",
"title": ""
},
{
"docid": "02321829f5adaec4811e15b9d46dc597",
"text": "\"Be careful what you wish for, you just might get it.\" - Proverb In 2005, computing education was experiencing a crisis. Enrollments had \"fallen to such an extent that some academic computing programs were facing significant reductions in staffing levels or even elimination\". The community responded, with panels to investigate and highlight ways to infuse \"passion, beauty, joy and awe\" into the introductory experiences, the CS10K project to bring computing to 10,000 teachers and 100,000 students, and better messaging of career opportunities, to name a few of the initiatives to bring students back into our seats.\n Well, by golly, it worked! It certainly didn't hurt our cause that Wall Street almost collapsed, young whiz kids were becoming TECH billionaires, an inspiring video and an interactive website led millions of people to code for an hour every December, or smart devices put computing into the hands of young people, and social media became the killer app. Whatever it was, CS became hot again. And we mean HOT. There are now several institutions around the world that have well over a thousand students taking CS1 in the Fall of 2015. There's just so much lemonade one can make before the seams start to burst, and the wheels come off the bus, as many shared at SIGCSE 2015 at the Birds of the Feather session.\n The goal of this panel is to bring together educators who were charged with delivering face-to-face CS1 on the grandest scale the field has ever seen. How did they cope? Does it become all people management with an army of Teaching Assistants? What were the differences and common themes in their survival plans? What is working? What mistakes were made? How are they supporting differential learning for the students who don't have the same experience as others? How is diversity being affected? Finally, what advice would they have for others interested in venturing into the tsunami, and broaden participation at a massive scale?",
"title": ""
},
{
"docid": "7a84328148fac2738d8954976b09aa45",
"text": "The region was covered by 1:250 000 mapping by the Geological Survey of Canada during the mid 1940s (Lord, 1948). A number of showings were found. One of these, the Marmot, was the focus of the first modern exploration (1960s) in the general area. At the same time there was significant exploration activity for porphyry copper and molybdenum mineralization in the intrusive belt running north and south through the McConnell Range. A large gossan was discovered in 1966 at the present site of the Kemess North prospect and led to similar exploration on nearby ground. Falconbridge Nickel Ltd., during a reconnaissance helicopter flight in 1971, discovered a malachite-stained bed in the Sustut drainage that was traceable for over 2500 feet. Their assessment suggested a replacement copper deposi t hosted by volcaniclastic rocks in the upper part of the Takla Group. Numerous junior and major resource companies acquired ground in the area. In 1972 copper was found on the Willow cliffs on the opposite side of the Sustut River and a porphyry style target was identified at the Day. In 1973 the B.C. Geological Survey conducted a mineral deposit study of the Sustut copper area (Church, 1974a). The Geological Survey of Canada returned to pursue general and detailed studies within the McConnell sheet (Richards 1976, and Monger 1977). Monger and Church (1976) revised the stratigraphic nomenclature based on breaks and lithological changes in the volcanic succession supported by fossil data and field observations. In 1983, follow up of a gold-copper-molybdenum soil anomaly led to the discovery of the Kemess South porphyry deposit.",
"title": ""
},
{
"docid": "627d5c8abee0b40c270b3de38ed84e80",
"text": "Patients with temporal lobe epilepsy (TLE) often display cognitive deficits. However, current epilepsy therapeutic interventions mainly aim at how to reduce the frequency and degree of epileptic seizures. Recovery of cognitive impairment is not attended enough, resulting in the lack of effective approaches in this respect. In the pilocarpine-induced temporal lobe epilepsy rat model, memory impairment has been classically reported. Here we evaluated spatial cognition changes at different epileptogenesis stages in rats of this model and explored the effects of long-term Mozart music exposure on the recovery of cognitive ability. Our results showed that pilocarpine rats suffered persisting cognitive impairment during epileptogenesis. Interestingly, we found that Mozart music exposure can significantly enhance cognitive ability in epileptic rats, and music intervention may be more effective for improving cognitive function during the early stages after Status epilepticus. These findings strongly suggest that Mozart music may help to promote the recovery of cognitive damage due to seizure activities, which provides a novel intervention strategy to diminish cognitive deficits in TLE patients.",
"title": ""
},
{
"docid": "80f6d8109c56b6573c3c0a9a3bc989f8",
"text": "In coded aperture imaging the attainable quality of the reconstructed images strongly depends on the choice of the aperture pattern. Optimum mask patterns can be designed from binary arrays with constant sidelobes of their periodic autocorrelation function, the so{called URAs. However, URAs exist for a restricted number of aperture sizes and open fractions only. Using a mismatched lter decoding scheme, artifact{free reconstructions can be obtained even if the aperture array violates the URA condition. A general expression and an upper bound for the signal{to{noise ratio as a function of the aperture array and the relative detector noise level are derived. Combinatorial optimization algorithms, such as the Great Deluge algorithm, are employed for the design of near{optimum aperture arrays. The signal{to{noise ratio of the reconstructions is predicted to be only slightly inferior to the URA case while no restrictions with respect to the aperture size or open fraction are imposed.",
"title": ""
}
] |
scidocsrr
|
35089915d9f374c0ceda5110b12bab24
|
History of cannabis as a medicine: a review.
|
[
{
"docid": "3392de7e3182420e882617f0baff389a",
"text": "BACKGROUND\nIndividuals who initiate cannabis use at an early age, when the brain is still developing, might be more vulnerable to lasting neuropsychological deficits than individuals who begin use later in life.\n\n\nMETHODS\nWe analyzed neuropsychological test results from 122 long-term heavy cannabis users and 87 comparison subjects with minimal cannabis exposure, all of whom had undergone a 28-day period of abstinence from cannabis, monitored by daily or every-other-day observed urine samples. We compared early-onset cannabis users with late-onset users and with controls, using linear regression controlling for age, sex, ethnicity, and attributes of family of origin.\n\n\nRESULTS\nThe 69 early-onset users (who began smoking before age 17) differed significantly from both the 53 late-onset users (who began smoking at age 17 or later) and from the 87 controls on several measures, most notably verbal IQ (VIQ). Few differences were found between late-onset users and controls on the test battery. However, when we adjusted for VIQ, virtually all differences between early-onset users and controls on test measures ceased to be significant.\n\n\nCONCLUSIONS\nEarly-onset cannabis users exhibit poorer cognitive performance than late-onset users or control subjects, especially in VIQ, but the cause of this difference cannot be determined from our data. The difference may reflect (1). innate differences between groups in cognitive ability, antedating first cannabis use; (2). an actual neurotoxic effect of cannabis on the developing brain; or (3). poorer learning of conventional cognitive skills by young cannabis users who have eschewed academics and diverged from the mainstream culture.",
"title": ""
}
] |
[
{
"docid": "ba452a03f619b7de7b37fe76bdb186e8",
"text": "Device variability is receiving a lot of interest recently due to its important impact on the design of digital integrated systems. In analog integrated circuits, the variability of identically designed devices has long been a concern since it directly affects the attainable precision. This paper reviews the mismatch device models that are widely used in analog design as well as the fundamental impact of device mismatch on the trade-off between different performance parameters.",
"title": ""
},
{
"docid": "4ae6afb7039936b2e6bcfc030fdb9cea",
"text": "Apart from being used as a means of entertainment, computer games have been adopted for a long time as a valuable tool for learning. Computer games can offer many learning benefits to students since they can consume their attention and increase their motivation and engagement which can then lead to stimulate learning. However, most of the research to date on educational computer games, in particular learning versions of existing computer games, focused only on learner with typical development. Rather less is known about designing educational games for learners with special needs. The current research presents the results of a pilot study. The principal aim of this pilot study is to examine the interest of learners with hearing impairments in using an educational game for learning the sign language notation system SignWriting. The results found indicated that, overall, the application is useful, enjoyable and easy to use: the game can stimulate the students’ interest in learning such notations.",
"title": ""
},
{
"docid": "e2ce393fade02f0dfd20b9aca25afd0f",
"text": "This paper presents a comparative lightning performance study conducted on a 275 kV double circuit shielded transmission line using two software programs, TFlash and Sigma-Slp. The line performance was investigated by using both a single stroke and a statistical performance analysis and considering cases of shielding failure and backflashover. A sensitivity analysis was carried out to determine the relationship between the flashover rate and the parameters influencing it. To improve the lightning performance of the line, metal oxide surge arresters were introduced using different phase and line locations. Optimised arrester arrangements are proposed.",
"title": ""
},
{
"docid": "39549cfe16eec5d4b083bf6a05c3d29f",
"text": "Recently, there has been increasing interest in learning semantic parsers with indirect supervision, but existing work focuses almost exclusively on question answering. Separately, there have been active pursuits in leveraging databases for distant supervision in information extraction, yet such methods are often limited to binary relations and none can handle nested events. In this paper, we generalize distant supervision to complex knowledge extraction, by proposing the first approach to learn a semantic parser for extracting nested event structures without annotated examples, using only a database of such complex events and unannotated text. The key idea is to model the annotations as latent variables, and incorporate a prior that favors semantic parses containing known events. Experiments on the GENIA event extraction dataset show that our approach can learn from and extract complex biological pathway events. Moreover, when supplied with just five example words per event type, it becomes competitive even among supervised systems, outperforming 19 out of 24 teams that participated in the original shared task.",
"title": ""
},
{
"docid": "b5788c52127d2ef06df428d758f1a225",
"text": "Conventional convolutional neural networks use either a linear or a nonlinear filter to extract features from an image patch (region) of spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ H\\times W $ </tex-math></inline-formula> (typically, <inline-formula> <tex-math notation=\"LaTeX\">$ H $ </tex-math></inline-formula> is small and is equal to <inline-formula> <tex-math notation=\"LaTeX\">$ W$ </tex-math></inline-formula>, e.g., <inline-formula> <tex-math notation=\"LaTeX\">$ H $ </tex-math></inline-formula> is 5 or 7). Generally, the size of the filter is equal to the size <inline-formula> <tex-math notation=\"LaTeX\">$ H\\times W $ </tex-math></inline-formula> of the input patch. We argue that the representational ability of equal-size strategy is not strong enough. To overcome the drawback, we propose to use subpatch filter whose spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ h\\times w $ </tex-math></inline-formula> is smaller than <inline-formula> <tex-math notation=\"LaTeX\">$ H\\times W $ </tex-math></inline-formula>. The proposed subpatch filter consists of two subsequent filters. The first one is a linear filter of spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ h\\times w $ </tex-math></inline-formula> and is aimed at extracting features from spatial domain. The second one is of spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ 1\\times 1 $ </tex-math></inline-formula> and is used for strengthening the connection between different input feature channels and for reducing the number of parameters. The subpatch filter convolves with the input patch and the resulting network is called a subpatch network. Taking the output of one subpatch network as input, we further repeat constructing subpatch networks until the output contains only one neuron in spatial domain. These subpatch networks form a new network called the cascaded subpatch network (CSNet). The feature layer generated by CSNet is called the <italic>csconv</italic> layer. For the whole input image, we construct a deep neural network by stacking a sequence of <italic>csconv</italic> layers. Experimental results on five benchmark data sets demonstrate the effectiveness and compactness of the proposed CSNet. For example, our CSNet reaches a test error of 5.68% on the CIFAR10 data set without model averaging. To the best of our knowledge, this is the best result ever obtained on the CIFAR10 data set.",
"title": ""
},
{
"docid": "2bd15d743690c8bcacb0d01650759d62",
"text": "With the large amount of available data and the variety of features they offer, electronic health records (EHR) have gotten a lot of interest over recent years, and start to be widely used by the machine learning and bioinformatics communities. While typical numerical fields such as demographics, vitals, lab measurements, diagnoses and procedures, are natural to use in machine learning models, there is no consensus yet on how to use the free-text clinical notes. We show how embeddings can be learned from patients’ history of notes, at the word, note and patient level, using simple neural and sequence models. We show on various relevant evaluation tasks that these embeddings are easily transferable to smaller problems, where they enable accurate predictions using only clinical notes.",
"title": ""
},
{
"docid": "cc9ee1b5111974da999d8c52ba393856",
"text": "The back propagation (BP) neural network algorithm is a multi-layer feedforward network trained according to error back propagation algorithm and is one of the most widely applied neural network models. BP network can be used to learn and store a great deal of mapping relations of input-output model, and no need to disclose in advance the mathematical equation that describes these mapping relations. Its learning rule is to adopt the steepest descent method in which the back propagation is used to regulate the weight value and threshold value of the network to achieve the minimum error sum of square. This paper focuses on the analysis of the characteristics and mathematical theory of BP neural network and also points out the shortcomings of BP algorithm as well as several methods for improvement.",
"title": ""
},
{
"docid": "8c2b0e93eae23235335deacade9660f0",
"text": "We design and implement a simple zero-knowledge argument protocol for NP whose communication complexity is proportional to the square-root of the verification circuit size. The protocol can be based on any collision-resistant hash function. Alternatively, it can be made non-interactive in the random oracle model, yielding concretely efficient zk-SNARKs that do not require a trusted setup or public-key cryptography.\n Our protocol is attractive not only for very large verification circuits but also for moderately large circuits that arise in applications. For instance, for verifying a SHA-256 preimage in zero-knowledge with 2-40 soundness error, the communication complexity is roughly 44KB (or less than 34KB under a plausible conjecture), the prover running time is 140 ms, and the verifier running time is 62 ms. This proof is roughly 4 times shorter than a similar proof of ZKB++ (Chase et al., CCS 2017), an optimized variant of ZKBoo (Giacomelli et al., USENIX 2016).\n The communication complexity of our protocol is independent of the circuit structure and depends only on the number of gates. For 2-40 soundness error, the communication becomes smaller than the circuit size for circuits containing roughly 3 million gates or more. Our efficiency advantages become even bigger in an amortized setting, where several instances need to be proven simultaneously.\n Our zero-knowledge protocol is obtained by applying an optimized version of the general transformation of Ishai et al. (STOC 2007) to a variant of the protocol for secure multiparty computation of Damgard and Ishai (Crypto 2006). It can be viewed as a simple zero-knowledge interactive PCP based on \"interleaved\" Reed-Solomon codes.",
"title": ""
},
{
"docid": "f4963c41832024b8cd7d3480204275fa",
"text": "Almost surreptitiously, crowdsourcing has entered software engineering practice. In-house development, contracting, and outsourcing still dominate, but many development projects use crowdsourcing-for example, to squash bugs, test software, or gather alternative UI designs. Although the overall impact has been mundane so far, crowdsourcing could lead to fundamental, disruptive changes in how software is developed. Various crowdsourcing models have been applied to software development. Such changes offer exciting opportunities, but several challenges must be met for crowdsourcing software development to reach its potential.",
"title": ""
},
{
"docid": "1420ad48fdba30ac37b176007c3945fa",
"text": "Accurate and fast foreground object extraction is very important for object tracking and recognition in video surveillance. Although many background subtraction (BGS) methods have been proposed in the recent past, it is still regarded as a tough problem due to the variety of challenging situations that occur in real-world scenarios. In this paper, we explore this problem from a new perspective and propose a novel background subtraction framework with real-time semantic segmentation (RTSS). Our proposed framework consists of two components, a traditional BGS segmenter B and a real-time semantic segmenter S. The BGS segmenter B aims to construct background models and segments foreground objects. The realtime semantic segmenter S is used to refine the foreground segmentation outputs as feedbacks for improving the model updating accuracy. B and S work in parallel on two threads. For each input frame It, the BGS segmenter B computes a preliminary foreground/background (FG/BG) mask Bt. At the same time, the real-time semantic segmenter S extracts the object-level semantics St. Then, some specific rules are applied on Bt and St to generate the final detection Dt. Finally, the refined FG/BG mask Dt is fed back to update the background model. Comprehensive experiments evaluated on the CDnet 2014 dataset demonstrate that our proposed method achieves stateof-the-art performance among all unsupervised background subtraction methods while operating at real-time, and even performs better than some deep learning based supervised algorithms. In addition, our proposed framework is very flexible and has the potential for generalization.",
"title": ""
},
{
"docid": "ae497143f2c1b15623ab35b360d954e5",
"text": "With the popularity of social media (e.g., Facebook and Flicker), users could easily share their check-in records and photos during their trips. In view of the huge amount of check-in data and photos in social media, we intend to discover travel experiences to facilitate trip planning. Prior works have been elaborated on mining and ranking existing travel routes from check-in data. We observe that when planning a trip, users may have some keywords about preference on his/her trips. Moreover, a diverse set of travel routes is needed. To provide a diverse set of travel routes, we claim that more features of Places of Interests (POIs) should be extracted. Therefore, in this paper, we propose a Keyword-aware Skyline Travel Route (KSTR) framework that use knowledge extraction from historical mobility records and the user's social interactions. Explicitly, we model the \"Where, When, Who\" issues by featurizing the geographical mobility pattern, temporal influence and social influence. Then we propose a keyword extraction module to classify the POI-related tags automatically into different types, for effective matching with query keywords. We further design a route reconstruction algorithm to construct route candidates that fulfill the query inputs. To provide diverse query results, we explore Skyline concepts to rank routes. To evaluate the effectiveness and efficiency of the proposed algorithms, we have conducted extensive experiments on real location-based social network datasets, and the experimental results show that KSTR does indeed demonstrate good performance compared to state-of-the-art works.",
"title": ""
},
{
"docid": "b1f000790b6ff45bd9b0b7ba3aec9cb2",
"text": "Broad-scale destruction and fragmentation of native vegetation is a highly visible result of human land-use throughout the world (Chapter 4). From the Atlantic Forests of South America to the tropical forests of Southeast Asia, and in many other regions on Earth, much of the original vegetation now remains only as fragments amidst expanses of land committed to feeding and housing human beings. Destruction and fragmentation of habitats are major factors in the global decline of populations and species (Chapter 10), the modification of native plant and animal communities and the alteration of ecosystem processes (Chapter 3). Dealing with these changes is among the greatest challenges facing the “mission-orientated crisis discipline” of conservation biology (Soulé 1986; see Chapter 1). Habitat fragmentation, by definition, is the “breaking apart” of continuous habitat, such as tropical forest or semi-arid shrubland, into distinct pieces. When this occurs, three interrelated processes take place: a reduction in the total amount of the original vegetation (i.e. habitat loss); subdivision of the remaining vegetation into fragments, remnants or patches (i.e. habitat fragmentation); and introduction of new forms of land-use to replace vegetation that is lost. These three processes are closely intertwined such that it is often difficult to separate the relative effect of each on the species or community of concern. Indeed, many studies have not distinguished between these components, leading to concerns that “habitat fragmentation” is an ambiguous, or even meaningless, concept (Lindenmayer and Fischer 2006). Consequently, we use “landscape change” to refer to these combined processes and “habitat fragmentation” for issues directly associated with the subdivision of vegetation and its ecological consequences. This chapter begins by summarizing the conceptual approaches used to understand conservation in fragmented landscapes. We then examine the biophysical aspects of landscape change, and how such change affects species and communities, posing two main questions: (i) what are the implications for the patterns of occurrence of species and communities?; and (ii) how does landscape change affect processes that influence the distribution and viability of species and communities? The chapter concludes by identifying the kinds of actions that will enhance the conservation of biota in fragmented landscapes.",
"title": ""
},
{
"docid": "52e1c954aefca110d15c24d90de902b2",
"text": "Reinforcement learning (RL) agents can benefit from adaptive exploration/exploitation behavior, especially in dynamic environments. We focus on regulating this exploration/exploitation behavior by controlling the action-selection mechanism of RL. Inspired by psychological studies which show that affect influences human decision making, we use artificial affect to influence an agent’s action-selection. Two existing affective strategies are implemented and, in addition, a new hybrid method that combines both. These strategies are tested on ‘maze tasks’ in which a RL agent has to find food (rewarded location) in a maze. We use Soar-RL, the new RL-enabled version of Soar, as a model environment. One task tests the ability to quickly adapt to an environmental change, while the other tests the ability to escape a local optimum in order to find the global optimum. We show that artificial affect-controlled action-selection in some cases helps agents to faster adapt to changes in the environment.",
"title": ""
},
{
"docid": "73bbb7122b588761f1bf7b711f21a701",
"text": "This research attempts to find a new closed-form solution of toroid and overlapping windings for axial flux permanent magnet machines. The proposed solution includes analytical derivations for winding lengths, resistances, and inductances as functions of fundamental airgap flux density and inner-to-outer diameter ratio. Furthermore, phase back-EMFs, phase terminal voltages, and efficiencies are calculated and compared for both winding types. Finite element analysis is used to validate the accuracy of the proposed analytical calculations. The proposed solution should assist machine designers to ascertain benefits and limitations of toroid and overlapping winding types as well as to get faster results.",
"title": ""
},
{
"docid": "611eacd767f1ea709c1c4aca7acdfcdb",
"text": "This paper presents a bi-directional converter applied in electric bike. The main structure is a cascade buck-boost converter, which transfers the energy stored in battery for driving motor, and can recycle the energy resulted from the back electromotive force (BEMF) to charge battery by changing the operation mode. Moreover, the proposed converter can also serve as a charger by connecting with AC line directly. Besides, the single-chip DSP TMS320F2812 is adopted as a control core to manage the switching behaviors of each mode and to detect the battery capacity. In this paper, the equivalent models of each mode and complete design considerations are all detailed. All the experimental results are used to demonstrate the feasibility.",
"title": ""
},
{
"docid": "abdd0d2c13c884b22075b2c3f54a0dfc",
"text": "Global clock distribution for multi-GHz microprocessors has become increasingly difficult and time-consuming to design. As the frequency of the global clock continues to increase, the timing uncertainty introduced by the clock network − the skew and jitter − must reduce proportional to the clock period. However, the clock skew and jitter for conventional, buffered H-trees are proportional to latency, which has increased for recent generations of microprocessors. A global clock network that uses standing waves and coupled oscillators has the potential to significantly reduce both skew and jitter. Standing waves have the unique property that phase does not depend on position, meaning that there is ideally no skew. They have previously been used for board-level clock distribution, on coaxial cables, and on superconductive wires but have never been implemented on-chip due to the large losses of on-chip interconnects. Networks of coupled oscillators have a phase-averaging effect that reduces both skew and jitter. However, none of the previous implementations of coupled-oscillator clock networks use standing waves and some require considerable circuitry to couple the oscillators. In this thesis, a global clock network that incorporates standing waves and coupled oscillators to distribute a high-frequency clock signal with low skew and low jitter is",
"title": ""
},
{
"docid": "a0f46c67118b2efec2bce2ecd96d11d6",
"text": "This paper describes the implementation of a service to identify and geo-locate real world events that may be present as social activity signals in two different social networks. Specifically, we focus on content shared by users on Twitter and Instagram in order to design a system capable of fusing data across multiple networks. Past work has demonstrated that it is indeed possible to detect physical events using various social network platforms. However, many of these signals need corroboration in order to handle events that lack proper support within a single network. We leverage this insight to design an unsupervised approach that can correlate event signals across multiple social networks. Our algorithm can detect events and identify the location of the event occurrence. We evaluate our algorithm using both simulations and real world datasets collected using Twitter and Instagram. The results indicate that our algorithm significantly improves false positive elimination and attains high precision compared to baseline methods on real world datasets.",
"title": ""
},
{
"docid": "03ec20a448dc861d8ba8b89b0963d52d",
"text": "Social Web 2.0 features have become a vital component in a variety of multimedia systems, e.g., YouTube and Last.fm. Interestingly, adult video websites are also starting to adopt these Web 2.0 principles, giving rise to the term “Porn 2.0”. This paper examines a large Porn 2.0 social network, through data covering 563k users. We explore a number of unusual behavioural aspects that set this apart from more traditional multimedia social networks. We particularly focus on the role of gender and sexuality, to understand how these different groups behave. A number of key differences are discovered relating to social demographics, modalities of interaction and content consumption habits, shedding light on this understudied area of online activity.",
"title": ""
},
{
"docid": "96a38b8b6286169cdd98aa6778456e0c",
"text": "Data mining is on the interface of Computer Science andStatistics, utilizing advances in both disciplines to make progressin extracting information from large databases. It is an emergingfield that has attracted much attention in a very short period oftime. This article highlights some statistical themes and lessonsthat are directly relevant to data mining and attempts to identifyopportunities where close cooperation between the statistical andcomputational communities might reasonably provide synergy forfurther progress in data analysis.",
"title": ""
},
{
"docid": "25d25da610b4b3fe54b665d55afc3323",
"text": "We address the problem of vision-based navigation in busy inner-city locations, using a stereo rig mounted on a mobile platform. In this scenario semantic information becomes important: rather than modelling moving objects as arbitrary obstacles, they should be categorised and tracked in order to predict their future behaviour. To this end, we combine classical geometric world mapping with object category detection and tracking. Object-category specific detectors serve to find instances of the most important object classes (in our case pedestrians and cars). Based on these detections, multi-object tracking recovers the objects’ trajectories, thereby making it possible to predict their future locations, and to employ dynamic path planning. The approach is evaluated on challenging, realistic video sequences recorded at busy inner-city locations.",
"title": ""
}
] |
scidocsrr
|
885af4ec364f295e717da5d6e0248ced
|
A Bayesian Foundation for Individual Learning Under Uncertainty
|
[
{
"docid": "a4c76e58074a42133a59a31d9022450d",
"text": "This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.",
"title": ""
}
] |
[
{
"docid": "b0b024072e7cde0b404a9be5862ecdd1",
"text": "Recent studies have led to the recognition of the epidermal growth factor receptor HER3 as a key player in cancer, and consequently this receptor has gained increased interest as a target for cancer therapy. We have previously generated several Affibody molecules with subnanomolar affinity for the HER3 receptor. Here, we investigate the effects of two of these HER3-specific Affibody molecules, Z05416 and Z05417, on different HER3-overexpressing cancer cell lines. Using flow cytometry and confocal microscopy, the Affibody molecules were shown to bind to HER3 on three different cell lines. Furthermore, the receptor binding of the natural ligand heregulin (HRG) was blocked by addition of Affibody molecules. In addition, both molecules suppressed HRG-induced HER3 and HER2 phosphorylation in MCF-7 cells, as well as HER3 phosphorylation in constantly HER2-activated SKBR-3 cells. Importantly, Western blot analysis also revealed that HRG-induced downstream signalling through the Ras-MAPK pathway as well as the PI3K-Akt pathway was blocked by the Affibody molecules. Finally, in an in vitro proliferation assay, the two Affibody molecules demonstrated complete inhibition of HRG-induced cancer cell growth. Taken together, our findings demonstrate that Z05416 and Z05417 exert an anti-proliferative effect on two breast cancer cell lines by inhibiting HRG-induced phosphorylation of HER3, suggesting that the Affibody molecules are promising candidates for future HER3-targeted cancer therapy.",
"title": ""
},
{
"docid": "1a6ece40fa87e787f218902eba9b89f7",
"text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"title": ""
},
{
"docid": "659cc5b1999c962c9fb0b3544c8b928a",
"text": "During the recent years the mainstream framework for HCI research — the informationprocessing cognitive psychology —has gained more and more criticism because of serious problems in applying it both in research and practical design. In a debate within HCI research the capability of information processing psychology has been questioned and new theoretical frameworks searched. This paper presents an overview of the situation and discusses potentials of Activity Theory as an alternative framework for HCI research and design.",
"title": ""
},
{
"docid": "e5bf05ae6700078dda83eca8d2f65cd4",
"text": "We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT systems using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-theart results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hints at a universal interlingua representation in our models and also show some interesting examples when mixing languages.",
"title": ""
},
{
"docid": "bc48242b9516948dc0ab95f1bead053f",
"text": "This article presents the semantic portal MuseumFinland for publishing heterogeneous museum collections on the Semantic Web. It is shown how museums with their semantically rich and interrelated collection content can create a large, consolidated semantic collection portal together on the web. By sharing a set of ontologies, it is possible to make collections semantically interoperable, and provide the museum visitors with intelligent content-based search and browsing services to the global collection base. The architecture underlying MuseumFinland separates generic search and browsing services from the underlying application dependent schemas and metadata by a layer of logical rules. As a result, the portal creation framework and software developed has been applied successfully to other domains as well. MuseumFinland got the Semantic Web Challence Award (second prize) in 2004. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ae6feb822ce68f336d831559b17c4c31",
"text": "Despite years of intensive research, Byzantine fault-tolerant (BFT) systems have not yet been adopted in practice. This is due to additional cost of BFT in terms of resources, protocol complexity and performance, compared with crash fault-tolerance (CFT). This overhead of BFT comes from the assumption of a powerful adversary that can fully control not only the Byzantine faulty machines, but at the same time also the message delivery schedule across the entire network, effectively inducing communication asynchrony and partitioning otherwise correct machines at will. To many practitioners, however, such strong attacks appear irrelevant. In this paper, we introduce cross fault tolerance or XFT, a novel approach to building reliable and secure distributed systems and apply it to the classical state-machine replication (SMR) problem. In short, an XFT SMR protocol provides the reliability guarantees of widely used asynchronous CFT SMR protocols such as Paxos and Raft, but also tolerates Byzantine faults in combination with network asynchrony, as long as a majority of replicas are correct and communicate synchronously. This allows the development of XFT systems at the price of CFT (already paid for in practice), yet with strictly stronger resilience than CFT — sometimes even stronger than BFT itself. As a showcase for XFT, we present XPaxos, the first XFT SMR protocol, and deploy it in a geo-replicated setting. Although it offers much stronger resilience than CFT SMR at no extra resource cost, the performance of XPaxos matches that of the state-of-the-art CFT protocols.",
"title": ""
},
{
"docid": "74497fc5d50ad6047d428714bfbba6b8",
"text": "Newer models for interacting with wireless sensors such as Internet of Things and Sensor Cloud aim to overcome restricted resources and efficiency. The Missouri S&T (science and technology) sensor cloud enables different networks, spread in a huge geographical area, to connect together and be employed simultaneously by multiple users on demand. Virtual sensors, which are at the core of this sensor cloud architecture, assist in creating a multiuser environment on top of resource-constrained physical wireless sensors and can help in supporting multiple applications.",
"title": ""
},
{
"docid": "fcd80cdb7d2d629f767f04b38c696355",
"text": "Electronic commerce and electronic business greatly need new payment systems that will support their further development. To better understand problems and perspectives of the electronic payment systems this article describes a classification and different characteristic aspects of payment systems. It suggests distinctions between payment systems and mediating systems, and is trying to illustrate advantages and limitations of diverse categories of payment systems using the defined characteristics. It is highlighting importance of userrelated aspects in design and introduction of electronic payment systems for mass customers.",
"title": ""
},
{
"docid": "e48c260c2a0ef52c1aff8d11a3dc071e",
"text": "Current transformer (CT) saturation can cause protective relay mal-operation or even prevent tripping. The wave shape of the secondary current is severely distorted as the CT is forced into deep saturation when the residual flux in the core adds to the flux change caused by faults. In this paper, a morphological lifting scheme is proposed to extract features contained in the waveform of the signal. The detection of the CT saturation is accurately achieved and the points of the inflection, where the saturation begins and ends, are found with the scheme used. This paper also presents a compensation algorithm, based upon the detection results, to reconstruct healthy secondary currents. The proposed morphological lifting scheme and compensation algorithm are demonstrated on a sample power system. The simulation results clearly indicate that they can successfully detect and compensate the distorted secondary current of a saturated CT with residual flux.",
"title": ""
},
{
"docid": "d087d4d0bb41f655f0743cf8e0963f0c",
"text": "A GTO current source inverter which consists of six main GTO's, two auxiliary GTO's, and three capacitors is presented. This inverter can supply both the sinusoidal voltage and current to the motor by pulsewidth modulation (PWM) techniques. The normal PWM pattern produced by two control signals with the carrier and the modulating waves and the optimal PWM pattern determined by the harmonic analysis are described. The experimental waveforms for 2.2-kW induction motor drives are given and the circuit operation of this inverter in the PWM technique is clearly shown. In addition, the steady-state characteristics of this inverter-induction motor drive system are analyzed by the state-variable methods, and a close agreement between the analyzed and the experimental waveforms is obtained. It is shown that the harmonic components are eliminated or reduced by using the optimal PWM pattern, and the new inverter with sinusoidal current and voltage is very excellent for ac motor drive.",
"title": ""
},
{
"docid": "7c3f14bbbb3cf2bbe7c9caaf42361445",
"text": "In this paper, we present a method for generating fast conceptual urban design prototypes. We synthesize spatial configurations for street networks, parcels and building volumes. Therefore, we address the problem of implementing custom data structures for these configurations and how the generation process can be controlled and parameterized. We exemplify our method by the development of new components for Grasshopper/Rhino3D and their application in the scope of selected case studies. By means of these components, we show use case applications of the synthesis algorithms. In the conclusion, we reflect on the advantages of being able to generate fast urban design prototypes, but we also discuss the disadvantages of the concept and the usage of Grasshopper as a user interface.",
"title": ""
},
{
"docid": "46ea64a204ae93855676146d84063c1a",
"text": "PURPOSE\nThe present study examined the utility of 2 measures proposed as markers of specific language impairment (SLI) in identifying specific impairments in language or working memory in school-age children.\n\n\nMETHOD\nA group of 400 school-age children completed a 5-min screening consisting of nonword repetition and sentence recall. A subset of low (n = 52) and average (n = 38) scorers completed standardized tests of language, short-term and working memory, and nonverbal intelligence.\n\n\nRESULTS\nApproximately equal numbers of children were identified with specific impairments in either language or working memory. A group about twice as large had deficits in both language and working memory. Sensitivity of the screening measure for both SLI and specific working memory impairments was 84% or greater, although specificity was closer to 50%. Sentence recall performance below the 10th percentile was associated with sensitivity and specificity values above 80% for SLI.\n\n\nCONCLUSIONS\nDevelopmental deficits may be specific to language or working memory, or include impairments in both areas. Sentence recall is a useful clinical marker of SLI and combined language and working memory impairments.",
"title": ""
},
{
"docid": "ec9b3423e0a71e8b9457f10eb874f2bc",
"text": "PURPOSE\nThe term \"buried penis\" has been applied to a variety of penile abnormalities and includes an apparent buried penis that is obvious at birth. The purpose of this study was to examine prospectively the congenital buried penis and to evaluate an operative technique for its management.\n\n\nMATERIALS AND METHODS\nA total of 31 males 2 to 28 months old (mean age 12.3 months) with a congenital buried penis underwent surgical correction of the anomaly. Measurements were made of the penile shaft skin, inner leaf of the prepuce, glans length and stretched penile length. Observations of the subcutaneous tissue of the penis were made. The outer leaf of the prepuce was resected, following which covering of the penile shaft was accomplished with a combination of the penile shaft skin and the inner leaf of the prepuce.\n\n\nRESULTS\nStretched penile lengths ranged from 2.3 to 4.1 cm (mean 3.1). The glans length from the tip of the glans dorsally to the corona ranged from 0.9 to 1.6 cm (mean 1.2). The inner leaf of the prepuce ranged from 0.9 to 2.2 cm (mean 1.5) in length, while the dorsal penile skin lengths were 1 to 1.6 cm (mean 0.8). In all patients complete shaft coverage was accomplished using a combination of varying degrees of penile shaft skin and inner leaf of the prepuce. In no case was there a requirement for either unfurling of the inner and outer leaf of the prepuce or mobilization of scrotal flaps to accomplish shaft coverage. All patients healed well and have done well with a followup of 6 months to 1 year.\n\n\nCONCLUSIONS\nCongenital buried penis is a syndrome consisting of a paucity of penile shaft skin and a short penile shaft. The anomaly may be successfully repaired by carefully preserving a length of inner leaf of the prepuce sufficient to cover, in some instances, the length of the penile shaft. Anchoring of the penile skin to the shaft is not recommended.",
"title": ""
},
{
"docid": "7b4e9043e11d93d8152294f410390f6d",
"text": "In this paper, we present a series of methods to authenticate a user with a graphical password. To that end, we employ the user¿s personal handheld device as the password decoder and the second factor of authentication. In our methods, a service provider challenges the user with an image password. To determine the appropriate click points and their order, the user needs some hint information transmitted only to her handheld device. We show that our method can overcome threats such as key-loggers, weak password, and shoulder surfing. With the increasing popularity of handheld devices such as cell phones, our approach can be leveraged by many organizations without forcing the user to memorize different passwords or carrying around different tokens.",
"title": ""
},
{
"docid": "46632965f75d0b07c8f35db944277ab1",
"text": "The aim of this cross-sectional study was to assess the complications associated with tooth supported fixed dental prosthesis amongst patients reporting at University College of Dentistry Lahore, Pakistan. An interview based questionnaire was used on 112 patients followed by clinical oral examination by two calibrated dentists. Approximately 95% participants were using porcelain fused to metal prosthesis with 60% of prosthesis being used in posterior segments of mouth. Complications like dental caries, coronal abutment fracture, radicular abutment fracture, occlusal interferences, root canal failures and decementations were more significantly associated with crowns than bridges (p=0.000). On the other hand esthetic issues, periapical lesions, periodontal problems, porcelain fractures and metal damage were more commonly associated with bridges (p=0.000). All cases of dental caries reported were associated with acrylic crown and bridges, whereas all coronal abutment fractures were associated with metal prosthesis (p=0.000). A significantly higher number of participants who got their fixed dental prosthesis from other sources i.e. Paramedics, technicians, dental assistants or unqualified dentists had periapical lesions, decementations, esthetic issues and periodontal diseases. This association was found to be statistically significant (p=0.000). Complications associated with fixed dental prosthesis like root canal failures, decementations, periapical lesions and periodontal disease were more significantly associated with prosthesis fabricated by other sources over the period of 5 to 10 years.",
"title": ""
},
{
"docid": "7cc9b6f1837d992b64071e2149e81a9a",
"text": "This article presents an application of Augmented Reality technology for interior design. Plus, an Educational Interior Design Project is reviewed. Along with the dramatic progress of digital technology, virtual information techniques are also required for architectural projects. Thus, the new technology of Augmented Reality offers many advantages for digital architectural design and construction fields. AR is also being considered as a new design approach for interior design. In an AR environment, the virtual furniture can be displayed and modified in real-time on the screen, allowing the user to have an interactive experience with the virtual furniture in a real-world environment. Here, AR environment is exploited as the new working environment for architects in architectural design works, and then they can do their work conveniently as such collaborative discussion through AR environment. Finally, this study proposes a new method for applying AR technology to interior design work, where a user can view virtual furniture and communicate with 3D virtual furniture data using a dynamic and flexible user interface. Plus, all the properties of the virtual furniture can be adjusted using occlusionbased interaction method for a Tangible Augmented Reality.",
"title": ""
},
{
"docid": "98cef46a572d3886c8a11fa55f5ff83c",
"text": "Deep convolutional neural networks (CNNs) have proven highly effective for visual recognition, where learning a universal representation from activations of convolutional layer plays a fundamental problem. In this paper, we present Fisher Vector encoding with Variational Auto-Encoder (FV-VAE), a novel deep architecture that quantizes the local activations of convolutional layer in a deep generative model, by training them in an end-to-end manner. To incorporate FV encoding strategy into deep generative models, we introduce Variational Auto-Encoder model, which steers a variational inference and learning in a neural network which can be straightforwardly optimized using standard stochastic gradient method. Different from the FV characterized by conventional generative models (e.g., Gaussian Mixture Model) which parsimoniously fit a discrete mixture model to data distribution, the proposed FV-VAE is more flexible to represent the natural property of data for better generalization. Extensive experiments are conducted on three public datasets, i.e., UCF101, ActivityNet, and CUB-200-2011 in the context of video action recognition and fine-grained image classification, respectively. Superior results are reported when compared to state-of-the-art representations. Most remarkably, our proposed FV-VAE achieves to-date the best published accuracy of 94.2% on UCF101.",
"title": ""
},
{
"docid": "6c52a9b8e7075ba78020f7ac246d7dd6",
"text": "A microgrid is a controllable component of the smart grid defined as a part of distribution network capable of supplying its own local load even in the case of disconnection from the upstream network. Microgrids incorporate large amount of renewable and non-renewable distributed generation (DG) that are connected to the system either directly or by power electronics (PE) interface. The diversity of technologies used in DGs and loads, high penetration of DGs, economic operation of DGs, dynamics of low-inertia conventional DGs and PE interfaced inertialess DGs and smart operation by means of enhanced communication infrastructure have raised challenges in widespread utilization of microgrids as basis of smart grids. Power quality, protection, economic and secure operation, active management, communication, dynamics and control of microgrids are among the most important issues under research both in academy and industry. Technical concerns over dynamics of microgrids especially in autonomous (island) mode necessitate revision of current paradigms in control of energy systems. This paper addresses current challenges towards controlling microgrids and surveys dynamic modeling, stability and control of microgrids. Future trends in realizing smart grids through aggregation of microgrids and research needs in this path are discussed at the end of this paper.",
"title": ""
},
{
"docid": "617e76bde28655d92eac1e22f5f56e32",
"text": "OBJECTIVE\nTo determine overall, test-retest and inter-rater reliability of posture indices among persons with idiopathic scoliosis.\n\n\nDESIGN\nA reliability study using two raters and two test sessions.\n\n\nSETTING\nTertiary care paediatric centre.\n\n\nPARTICIPANTS\nSeventy participants aged between 10 and 20 years with different types of idiopathic scoliosis (Cobb angle 15 to 60°) were recruited from the scoliosis clinic.\n\n\nMAIN OUTCOME MEASURES\nBased on the XY co-ordinates of natural reference points (e.g., eyes) as well as markers placed on several anatomical landmarks, 32 angular and linear posture indices taken from digital photographs in the standing position were calculated from a specially developed software program. Generalisability theory served to estimate the reliability and standard error of measurement (SEM) for the overall, test-retest and inter-rater designs. Bland and Altman's method was also used to document agreement between sessions and raters.\n\n\nRESULTS\nIn the random design, dependability coefficients demonstrated a moderate level of reliability for six posture indices (ϕ=0.51 to 0.72) and a good level of reliability for 26 posture indices out of 32 (ϕ≥0.79). Error attributable to marker placement was negligible for most indices. Limits of agreement and SEM values were larger for shoulder protraction, trunk list, Q angle, cervical lordosis and scoliosis angles. The most reproducible indices were waist angles and knee valgus and varus.\n\n\nCONCLUSIONS\nPosture can be assessed in a global fashion from photographs in persons with idiopathic scoliosis. Despite the good reliability of marker placement, other studies are needed to minimise measurement errors in order to provide a suitable tool for monitoring change in posture over time.",
"title": ""
},
{
"docid": "6182626269d38c81fa63eb2cab91caca",
"text": "Environmental management, a term encompassing environmental planning, protection, monitoring, assessment, research, education, conservation and sustainable use of resources, is now accepted as a major guiding factor for sustainable development at the regional and national level. It is now being increasingly recognized that environmental factors and ecological imperatives must be in built to the total planning process if the long-term goal of making industrial development sustainable is to be achieved. Here we will try to define and discuss the role of Environmental Analysis in the strategic management process of organization. The present complex world require as far as is feasible, it consider impact of important factors related to organizations in strategic planning. The strategic planning of business includes all functional subdivisions and forwards them in a united direction. One of these subsystems is human resource management. Strategic human resource management comes after the strategic planning, and followed by strategic human resource planning as a major activity in all the industries. In strategic planning, it can use different analytical methods and techniques that one of them is PEST analysis. This paper introduces how to apply it in a new manner.",
"title": ""
}
] |
scidocsrr
|
37763c0631aa990242d020566f824f2b
|
Heterogeneous Knowledge Transfer in Video Emotion Recognition, Attribution and Summarization
|
[
{
"docid": "69504625b05c735dd80135ef106a8677",
"text": "The amount of videos available on the Web is growing explosively. While some videos are very interesting and receive high rating from viewers, many of them are less interesting or even boring. This paper conducts a pilot study on the understanding of human perception of video interestingness, and demonstrates a simple computational method to identify more interesting videos. To this end we first construct two datasets of Flickr and YouTube videos respectively. Human judgements of interestingness are collected and used as the groundtruth for training computational models. We evaluate several off-the-shelf visual and audio features that are potentially useful for predicting interestingness on both datasets. Results indicate that audio and visual features are equally important and the combination of both modalities shows very promising results.",
"title": ""
}
] |
[
{
"docid": "a0b147e6baae3ea7622446da0b8d8e26",
"text": "The Web has come a long way since its invention by Berners-Lee, when it focused essentially on visualization and presentation of content for human consumption (Syntactic Web), to a Web providing meaningful content, facilitating the integration between people and machines (Semantic Web). This paper presents a survey of different tools that provide the enrichment of the Web with understandable annotation, in order to make its content available and interoperable between systems. We can group Semantic Annotation tools into the diverse dimensions: dynamicity, storage, information extraction process, scalability and customization. The analysis of the different annotation tools shows that (semi-)automatic and automatic systems aren't as efficient as needed without human intervention and will continue to evolve to solve the challenge. Microdata, RDFa and the new HTML5 standard will certainly bring new contributions to this issue.",
"title": ""
},
{
"docid": "00bc7c810946fa30bf1fdc66e8fb7fc2",
"text": "Voluntary motor commands produce two kinds of consequences. Initially, a sensory consequence is observed in terms of activity in our primary sensory organs (e.g., vision, proprioception). Subsequently, the brain evaluates the sensory feedback and produces a subjective measure of utility or usefulness of the motor commands (e.g., reward). As a result, comparisons between predicted and observed consequences of motor commands produce two forms of prediction error. How do these errors contribute to changes in motor commands? Here, we considered a reach adaptation protocol and found that when high quality sensory feedback was available, adaptation of motor commands was driven almost exclusively by sensory prediction errors. This form of learning had a distinct signature: as motor commands adapted, the subjects altered their predictions regarding sensory consequences of motor commands, and generalized this learning broadly to neighboring motor commands. In contrast, as the quality of the sensory feedback degraded, adaptation of motor commands became more dependent on reward prediction errors. Reward prediction errors produced comparable changes in the motor commands, but produced no change in the predicted sensory consequences of motor commands, and generalized only locally. Because we found that there was a within subject correlation between generalization patterns and sensory remapping, it is plausible that during adaptation an individual's relative reliance on sensory vs. reward prediction errors could be inferred. We suggest that while motor commands change because of sensory and reward prediction errors, only sensory prediction errors produce a change in the neural system that predicts sensory consequences of motor commands.",
"title": ""
},
{
"docid": "ea646c7d5c04a44e33fefc87818c2a11",
"text": "Learning to rank has become an important research topic in machine learning. While most learning-to-rank methods learn the ranking functions by minimizing loss functions, it is the ranking measures (such as NDCG and MAP) that are used to evaluate the performance of the learned ranking functions. In this work, we reveal the relationship between ranking measures and loss functions in learningto-rank methods, such as Ranking SVM, RankBoost, RankNet, and ListMLE. We show that the loss functions of these methods are upper bounds of the measurebased ranking errors. As a result, the minimization of these loss functions will lead to the maximization of the ranking measures. The key to obtaining this result is to model ranking as a sequence of classification tasks, and define a so-called essential loss for ranking as the weighted sum of the classification errors of individual tasks in the sequence. We have proved that the essential loss is both an upper bound of the measure-based ranking errors, and a lower bound of the loss functions in the aforementioned methods. Our proof technique also suggests a way to modify existing loss functions to make them tighter bounds of the measure-based ranking errors. Experimental results on benchmark datasets show that the modifications can lead to better ranking performances, demonstrating the correctness of our theoretical analysis.",
"title": ""
},
{
"docid": "61160371b2a85f1b937105cc43d3c70d",
"text": "Regular expressions are extremely useful, because they allow us to work with text in terms of patterns. They are considered the most sophisticated means of performing operations such as string searching, manipulation, validation, and formatting in all applications that deal with text data. Character recognition problem scenarios in sequence analysis that are ideally suited for the application of regular expression algorithms. This paper describes a use of regular expressions in this problem domain, and demonstrates how the effective use of regular expressions that can serve to facilitate more efficient and more effective character recognition.",
"title": ""
},
{
"docid": "f7fc47986046f9d02f9b89f244341123",
"text": "Incorporating the body dynamics of compliant robots into their controller architectures can drastically reduce the complexity of locomotion control. An extreme version of this embodied control principle was demonstrated in highly compliant tensegrity robots, for which stable gait generation was achieved by using only optimized linear feedback from the robot's sensors to its actuators. The morphology of quadrupedal robots has previously been used for sensing and for control of a compliant spine, but never for gait generation. In this paper, we successfully apply embodied control to the compliant, quadrupedal Oncilla robot. As initial experiments indicated that mere linear feedback does not suffice, we explore the minimal requirements for robust gait generation in terms of memory and nonlinear complexity. Our results show that a memoryless feedback controller can generate a stable trot by learning the desired nonlinear relation between the input and the output signals. We believe this method can provide a very useful tool for transferring knowledge from open loop to closed loop control on compliant robots.",
"title": ""
},
{
"docid": "7b44c4ec18d01f46fdd513780ba97963",
"text": "This paper presents a robust approach for road marking detection and recognition from images captured by an embedded camera mounted on a car. Our method is designed to cope with illumination changes, shadows, and harsh meteorological conditions. Furthermore, the algorithm can effectively group complex multi-symbol shapes into an individual road marking. For this purpose, the proposed technique relies on MSER features to obtain candidate regions which are further merged using density-based clustering. Finally, these regions of interest are recognized using machine learning approaches. Worth noting, the algorithm is versatile since it does not utilize any prior information about lane position or road space. The proposed method compares favorably to other existing works through a large number of experiments on an extensive road marking dataset.",
"title": ""
},
{
"docid": "c2e0166a7604836cc33836d1ca86e335",
"text": "Owing to the dramatic mobile IP growth, the emerging Internet of Things, and cloud-based applications, wireless networking is witnessing a paradigm shift. By fully exploiting spatial degrees of freedom, massive multiple-input-multiple-output (MIMO) systems promise significant gains in data rates and link reliability. Although the research community has recognized the theoretical benefits of these systems, building the hardware of such complex systems is a challenge in practice. This paper presents a time division duplex (TDD)-based 128-antenna massive MIMO prototype system from theory to reality. First, an analytical signal model is provided to facilitate the setup of a feasible massive MIMO prototype system. Second, a link-level simulation consistent with practical TDDbased massive MIMO systems is conducted to guide and validate the massive MIMO system design. We design and implement the TDDbased 128-antenna massive MIMO prototype system with the guidelines obtained from the link-level simulation. Uplink real-time video transmission and downlink data transmission under the configuration of multiple single-antenna users are achieved. Comparisons with state-of-the-art prototypes demonstrate the advantages of the proposed system in terms of antenna number, bandwidth, latency, and throughput. The proposed system is also equipped with scalability, which makes the system applicable to a wide range of massive scenarios.",
"title": ""
},
{
"docid": "ee6906550c2f9d294e411688bae5db71",
"text": "This position paper formalises an abstract model for complex negotiation dialogue. This model is to be used for the benchmark of optimisation algorithms ranging from Reinforcement Learning to Stochastic Games, through Transfer Learning, One-Shot Learning or others.",
"title": ""
},
{
"docid": "6e4dcb451292cc38cb72300a24135c1b",
"text": "This survey gives state-of-the-art of genetic algorithm (GA) based clustering techniques. Clustering is a fundamental and widely applied method in understanding and exploring a data set. Interest in clustering has increased recently due to the emergence of several new areas of applications including data mining, bioinformatics, web use data analysis, image analysis etc. To enhance the performance of clustering algorithms, Genetic Algorithms (GAs) is applied to the clustering algorithm. GAs are the best-known evolutionary techniques. The capability of GAs is applied to evolve the proper number of clusters and to provide appropriate clustering. This paper present some existing GA based clustering algorithms and their application to different problems and domains.",
"title": ""
},
{
"docid": "e7773b4aa444ceae84f100af5ac71034",
"text": "Location sharing services (LSS) like Foursquare, Gowalla, and Facebook Places support hundreds of millions of userdriven footprints (i.e., “checkins”). Those global-scale footprints provide a unique opportunity to study the social and temporal characteristics of how people use these services and to model patterns of human mobility, which are significant factors for the design of future mobile+location-based services, traffic forecasting, urban planning, as well as epidemiological models of disease spread. In this paper, we investigate 22 million checkins across 220,000 users and report a quantitative assessment of human mobility patterns by analyzing the spatial, temporal, social, and textual aspects associated with these footprints. We find that: (i) LSS users follow the “Lèvy Flight” mobility pattern and adopt periodic behaviors; (ii) While geographic and economic constraints affect mobility patterns, so does individual social status; and (iii) Content and sentiment-based analysis of posts associated with checkins can provide a rich source of context for better understanding how users engage with these services.",
"title": ""
},
{
"docid": "11112e1738bd27f41a5b57f07b71292c",
"text": "Rotor-cage fault detection in inverter-fed induction machines is still difficult nowadays as the dynamics introduced by the control or load influence the fault-indicator signals commonly applied. In addition, detection is usually possible only when the machine is operated above a specific load level to generate a significant rotor-current magnitude. This paper proposes a new method of detecting rotor-bar defects at zero load and almost at standstill. The method uses the standard current sensors already present in modern industrial inverters and, hence, is noninvasive. It is thus well suited as a start-up test for drives. By applying an excitation with voltage pulses using the switching of the inverter and then measuring the resulting current slope, a new fault indicator is obtained. As a result, it is possible to clearly identify the fault-induced asymmetry in the machine's transient reactances. Although the transient-flux linkage cannot penetrate the rotor because of the cage, the faulty bar locally influences the zigzag flux, leading to a significant change in the transient reactances. Measurement results show the applicability and sensitivity of the proposed method.",
"title": ""
},
{
"docid": "80d8a8c09e9918981d1a93e5bccf45ba",
"text": "In this paper, we study a multi-residential electricity load scheduling problem with multi-class appliances in smart grid. Compared with the previous works in which only limited types of appliances are considered or only single residence grids are considered, we model the grid system more practically with jointly considering multi-residence and multi-class appliance. We formulate an optimization problem to maximize the sum of the overall satisfaction levels of residences which is defined as the sum of utilities of the residential customers minus the total cost for energy consumption. Then, we provide an electricity load scheduling algorithm by using a PL-Generalized Benders Algorithm which operates in a distributed manner while protecting the private information of the residences. By applying the algorithm, we can obtain the near-optimal load scheduling for each residence, which is shown to be very close to the optimal scheduling, and also obtain the lower and upper bounds on the optimal sum of the overall satisfaction levels of all residences, which are shown to be very tight.",
"title": ""
},
{
"docid": "f15f72e8b513b0a9b7ddb9b73a559571",
"text": "Teenagers are among the most prolific users of social network sites (SNS). Emerging studies find that youth spend a considerable portion of their daily life interacting through social media. Subsequently, questions and controversies emerge about the effects SNS have on adolescent development. This review outlines the theoretical frameworks researchers have used to understand adolescents and SNS. It brings together work from disparate fields that examine the relationship between SNS and social capital, privacy, youth safety, psychological well-being, and educational achievement.These research strands speak to high-profile concerns and controversies that surround youth participation in these online communities, and offer ripe areas for future research.",
"title": ""
},
{
"docid": "eb32ce661a0d074ce90861793a2e4de7",
"text": "A new transfer function from control voltage to duty cycle, the closed-current loop, which captures the natural sampling effect is used to design a controller for the voltage-loop of a pulsewidth modulated (PWM) dc-dc converter operating in continuous-conduction mode (CCM) with peak current-mode control (PCM). This paper derives the voltage loop gain and the closed-loop transfer function from reference voltage to output voltage. The closed-loop transfer function from the input voltage to the output voltage, or the closed-loop audio-susceptibility is derived. The closed-loop transfer function from output current to output voltage, or the closed loop output impedance is also derived. The derivation is performed using an averaged small-signal model of the example boost converter for CCM. Experimental verification is presented. The theoretical and experimental results were in good agreement, confirming the validity of the transfer functions derived.",
"title": ""
},
{
"docid": "3d7406edd98fbdf6587076f88b191569",
"text": "I am the very model of a modern Major-General, I've information vegetable, animal, and mineral, I know the kings of England, and I quote the fights historical From Marathon to Waterloo, in order categorical... Imagine that you are an analyst with an investment firm that tracks airline stocks. You're given the task of determining the relationship (if any) between airline announcements of fare increases and the behavior of their stocks the next day. Historical data about stock prices is easy to come by, but what about the airline an-nouncements? You will need to know at least the name of the airline, the nature of the proposed fare hike, the dates of the announcement, and possibly the response of other airlines. Fortunately, these can be all found in news articles like this one: Citing high fuel prices, United Airlines said Friday it has increased fares by $6 per round trip on flights to some cities also served by lower-cost carriers. American Airlines, a unit of AMR Corp., immediately matched the move, spokesman Tim Wagner said. United, a unit of UAL Corp., said the increase took effect Thursday and applies to most routes where it competes against discount carriers, such as Chicago to Dallas and Denver to San Francisco. This chapter presents techniques for extracting limited kinds of semantic content from text. This process of information extraction (IE), turns the unstructured information extraction information embedded in texts into structured data, for example for populating a relational database to enable further processing. The first step in most IE tasks is to find the proper names or named entities mentioned in a text. The task of named entity recognition (NER) is to find each named entity recognition mention of a named entity in the text and label its type. What constitutes a named entity type is application specific; these commonly include people, places, and organizations but also more specific entities from the names of genes and proteins (Cohen and Demner-Fushman, 2014) to the names of college courses (McCallum, 2005). Having located all of the mentions of named entities in a text, it is useful to link, or cluster, these mentions into sets that correspond to the entities behind the mentions, for example inferring that mentions of United Airlines and United in the sample text refer to the same real-world entity. We'll defer discussion of this task of coreference resolution until Chapter 23. The task …",
"title": ""
},
{
"docid": "6241cb482e386435be2e33caf8d94216",
"text": "A fog radio access network (F-RAN) is studied, in which $K_T$ edge nodes (ENs) connected to a cloud server via orthogonal fronthaul links, serve $K_R$ users through a wireless Gaussian interference channel. Both the ENs and the users have finite-capacity cache memories, which are filled before the user demands are revealed. While a centralized placement phase is used for the ENs, which model static base stations, a decentralized placement is leveraged for the mobile users. An achievable transmission scheme is presented, which employs a combination of interference alignment, zero-forcing and interference cancellation techniques in the delivery phase, and the \\textit{normalized delivery time} (NDT), which captures the worst-case latency, is analyzed.",
"title": ""
},
{
"docid": "900d9747114db774abcb26bb01b8a89e",
"text": "Social-networking functions are increasingly embed ded in online rating systems. These functions alter the rating context in which c onsumer ratings are generated. In this paper, we empirically investigate online friends’ s ocial influence in online book ratings. Our quasi-experiment research design exploits the t emporal sequence of social-networking events and ratings and offers a n ew method for identifying social influence while accounting for the homophily effect . We find rating similarity between friends is significantly higher after the formation f the friend relationships, indicating that with social-networking functions, online ratin g contributors are socially nudged when giving their ratings. Additional exploration o f contingent factors suggests that social influence is stronger for older books and us ers who have smaller networks, and relatively more recent and extremely negative ratin gs cast more salient influence. Our study suggests that friends’ social influence is an important consideration when introducing social-networking functions to online r ating systems.",
"title": ""
},
{
"docid": "c20393a25f4e53be6df2bd49abf6635f",
"text": "This paper overviews NTCIR-13 Actionable Knowledge Graph (AKG) task. The task focuses on finding possible actions related to input entities and the relevant properties of such actions. AKG is composed of two subtasks: Action Mining (AM) and Actionable Knowledge Graph Generation (AKGG). Both subtasks are focused on English language. 9 runs have been submitted by 4 teams for the task. In this paper we describe both the subtasks, datasets, evaluation methods and the results of meta analyses.",
"title": ""
},
{
"docid": "8c28bfbbd2de24340b56f634d982c1ed",
"text": "The public perception of shared goods has changed substantially in the past few years. While co-owning properties has been widely accepted for a while (e.g., timeshares), the notion of sharing bikes, cars, or even rides on an on-demand basis is just now starting to gain widespread popularity. The emerging “sharing economy” is particularly interesting in the context of cities that struggle with population growth and increasing density. While sharing vehicles promises to reduce inner-city traffic, congestion, and pollution problems, the associated business models are not without problems themselves. Using agency theory, in this article we discuss existing shared mobility business models in an effort to unveil the optimal relationship between service providers (agents) and the local governments (principals) to achieve the common objective of sustainable mobility. Our findings show private or public models are fraught with conflicts, and point to a merit model as the most promising alignment of the strengths of agents and principals.",
"title": ""
},
{
"docid": "c90ab409ea2a9726f6ddded45e0fdea9",
"text": "About a decade ago, the Adult Attachment Interview (AAI; C. George, N. Kaplan, & M. Main, 1985) was developed to explore parents' mental representations of attachment as manifested in language during discourse of childhood experiences. The AAI was intended to predict the quality of the infant-parent attachment relationship, as observed in the Ainsworth Strange Situation, and to predict parents' responsiveness to their infants' attachment signals. The current meta-analysis examined the available evidence with respect to these predictive validity issues. In regard to the 1st issue, the 18 available samples (N = 854) showed a combined effect size of 1.06 in the expected direction for the secure vs. insecure split. For a portion of the studies, the percentage of correspondence between parents' mental representation of attachment and infants' attachment security could be computed (the resulting percentage was 75%; kappa = .49, n = 661). Concerning the 2nd issue, the 10 samples (N = 389) that were retrieved showed a combined effect size of .72 in the expected direction. According to conventional criteria, the effect sizes are large. It was concluded that although the predictive validity of the AAI is a replicated fact, there is only partial knowledge of how attachment representations are transmitted (the transmission gap).",
"title": ""
}
] |
scidocsrr
|
affb45576ee4afb4926af345c1ef2f5c
|
Forensic analysis of encrypted instant messaging applications on Android
|
[
{
"docid": "4e938aed527769ad65d85bba48151d21",
"text": "We provide a thorough description of all the artifacts that are generated by the messenger application Telegram on Android OS. We also provide interpretation of messages that are generated and how they relate to one another. Based on the results of digital forensics investigation and analysis in this paper, an analyst/investigator will be able to read, reconstruct and provide chronological explanations of messages which are generated by the user. Using three different smartphone device vendors and Android OS versions as the objects of our experiments, we conducted tests in a forensically sound manner.",
"title": ""
},
{
"docid": "e7a6082f1b6c441ebdde238cc8eb21c2",
"text": "We present the forensic analysis of the artifacts generated on Android smartphones by ChatSecure, a secure Instant Messaging application that provides strong encryption for transmitted and locally-stored data to ensure the privacy of its users. We show that ChatSecure stores local copies of both exchanged messages and files into two distinct, AES-256 encrypted databases, and we devise a technique able to decrypt them when the secret passphrase, chosen by the user as the initial step of the encryption process, is known. Furthermore, we show how this passphrase can be identified and extracted from the volatile memory of the device, where it persists for the entire execution of ChatSecure after having been entered by the user, thus allowing one Please, cite as: Cosimo Anglano, Massimo Canonico, Marco Guazzone, “Forensic Analysis of the ChatSecure Instant Messaging Application on Android Smartphones,” Digital Investigation, Volume 19, December 2016, Pages 44–59, DOI: 10.1016/j.diin.2016.10.001 Link to publisher: http://dx.doi.org/10.1016/j.diin.2016.10.001 ∗Corresponding author. Address: viale T. Michel 11, 15121 Alessandria (Italy). Phone: +39 0131 360188. Email addresses: [email protected] (Cosimo Anglano), [email protected] (Massimo Canonico), [email protected] (Marco Guazzone) Preprint submitted to Digital Investigation October 24, 2016 to carry out decryption even if the passphrase is not revealed by the user. Finally, we discuss how to analyze and correlate the data stored in the databases used by ChatSecure to identify the IM accounts used by the user and his/her buddies to communicate, as well as to reconstruct the chronology and contents of the messages and files that have been exchanged among them. For our study we devise and use an experimental methodology, based on the use of emulated devices, that provides a very high degree of reproducibility of the results, and we validate the results it yields against those obtained from real smartphones.",
"title": ""
},
{
"docid": "5dad207fe80469fe2b80d1f1e967575e",
"text": "As the geolocation capabilities of smartphones continue to improve, developers have continued to create more innovative applications that rely on this location information for their primary function. This can be seen with Niantic’s release of Pokémon GO, which is a massively multiplayer online role playing and augmented reality game. This game became immensely popular within just a few days of its release. However, it also had the propensity to be a distraction to drivers resulting in numerous accidents, and was used to as a tool by armed robbers to lure unsuspecting users into secluded areas. This facilitates a need for forensic investigators to be able to analyze the data within the application in order to determine if it may have been involved in these incidents. Because this application is new, limited research has been conducted regarding the artifacts that can be recovered from the application. In this paper, we aim to fill the gaps within the current research by assessing what forensically relevant information may be recovered from the application, and understanding the circumstances behind the creation of this information. Our research focuses primarily on the artifacts generated by the Upsight analytics platform, those contained within the bundles directory, and the Pokémon Go Plus accessory. Moreover, we present our new application specific analysis tool that is capable of extracting forensic artifacts from a backup of the Android application, and presenting them to an investigator in an easily readable format. This analysis tool exceeds the capabilities of UFED Physical Analyzer in processing Pokémon GO application data.",
"title": ""
}
] |
[
{
"docid": "1ae16863be5df70d33d4a7f6a685ab17",
"text": "Frank Chen • Zvi Drezner • Jennifer K. Ryan • David Simchi-Levi Decision Sciences Department, National University of Singapore, 119260 Singapore Department of MS & IS, California State University, Fullerton, California 92834 School of Industrial Engineering, Purdue University, West Lafayette, Indiana 47907 Department of IE & MS, Northwestern University, Evanston, Illinois 60208 [email protected] • [email protected] • [email protected] • [email protected]",
"title": ""
},
{
"docid": "538f5c7185a6a045ef2719e35b224181",
"text": "Robotics has been widely used in education as a learning tool to attract and motivate students in performing laboratory experiments within the context of mechatronics, electronics, microcomputer, and control. In this paper we propose an implementation of cascaded PID control algorithm for line follower balancing robot. The algorithm is implemented on ADROIT V1 education robot kits. The robot should be able to follow the trajectory given by the circular guideline while maintaining its balance condition. The controller also designed to control the speed of robot movement while tracking the line. To obtain this purpose, there are three controllers that is used in the same time; balancing controller, speed controller and the line following controller. Those three controllers are cascaded to control the movement of the robot that uses two motors as its actuator. From the experiment, the proposed cascaded PID controller shows an acceptable performance for the robot to maintain its balance position while following the circular line with the given speed setpoint.",
"title": ""
},
{
"docid": "8f13dd664f1d74c9684fc4431bcda3da",
"text": "The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. Through an extensive set of experiments, we conclude the right set of parameters to produce augmented data which can maximally enhance the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that the models trained on augmented imagery generalize better than those trained on fully synthetic data or models trained on limited amounts of annotated real data.",
"title": ""
},
{
"docid": "bf65f2c68808755cfcd13e6cc7d0ccab",
"text": "Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.",
"title": ""
},
{
"docid": "8f444ac95ff664e06e1194dd096e4f31",
"text": "Entity alignment aims to link entities and their counterparts among multiple knowledge graphs (KGs). Most existing methods typically rely on external information of entities such as Wikipedia links and require costly manual feature construction to complete alignment. In this paper, we present a novel approach for entity alignment via joint knowledge embeddings. Our method jointly encodes both entities and relations of various KGs into a unified low-dimensional semantic space according to a small seed set of aligned entities. During this process, we can align entities according to their semantic distance in this joint semantic space. More specifically, we present an iterative and parameter sharing method to improve alignment performance. Experiment results on realworld datasets show that, as compared to baselines, our method achieves significant improvements on entity alignment, and can further improve knowledge graph completion performance on various KGs with the favor of joint knowledge embeddings.",
"title": ""
},
{
"docid": "453191a57a9282248b0d5b8a85fa4ce0",
"text": "The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8.",
"title": ""
},
{
"docid": "a45818ee6b078e3b153aae7995558e4f",
"text": "The reliability of the transmission of the switching signal of IGBT in a static converter is crucial. In fact, if the switching signals are badly transmitted, the power converter can be short-circuited with dramatic consequences. Thus, the operating of such a system can be stopped with heavy economic consequences, as it is the case for an electric train. Many techniques have been developed to achieve solutions for a safe transmission of switching signals with a good galvanic insulation. In very high-voltage, over 10 kV, an optimal solution is to use optic fibres. This technology is limited by the fibre degradation in high temperature atmosphere. Actually, this problem exists in trains. The common use of the radio frequency transmission (RFT) can be exploited to achieve an original IGBT wireless driver. This solution seems to be interesting because high temperature do not interfere with radio frequency transmission. However, radiated electromagnetic interferences (EMI) are drastically important in such an electrical environment, EMI can disturb the RFT. In order to optimise the transmission of switching signals, we have decided to transmit the signals through the energy supplying link. This last device is constituted by a double galvanic insulation transformer (DGIT). The difficulty is to transmit the energy, which is used for the IGBT driver supply and the switching signals in the same loop wire. The paper will highlight this aspect",
"title": ""
},
{
"docid": "0b1db23ae4767d7653e3198919706e99",
"text": "Greenhouse cultivation has evolved from simple covered rows of open-fields crops to highly sophisticated controlled environment agriculture (CEA) facilities that projected the image of plant factories for urban agriculture. The advances and improvements in CEA have promoted the scientific solutions for the efficient production of plants in populated cities and multi-story buildings. Successful deployment of CEA for urban agriculture requires many components and subsystems, as well as the understanding of the external influencing factors that should be systematically considered and integrated. This review is an attempt to highlight some of the most recent advances in greenhouse technology and CEA in order to raise the awareness for technology transfer and adaptation, which is necessary for a successful transition to urban agriculture. This study reviewed several aspects of a high-tech CEA system including improvements in the frame and covering materials, environment perception and data sharing, and advanced microclimate control and energy optimization models. This research highlighted urban agriculture and its derivatives, including vertical farming, rooftop greenhouses and plant factories which are the extensions of CEA and have emerged as a response to the growing population, environmental degradation, and urbanization that are threatening food security. Finally, several opportunities and challenges have been identified in implementing the integrated CEA and vertical farming for urban agriculture.",
"title": ""
},
{
"docid": "1e7b1bbaba8b9f9a1e28db42e18c23bf",
"text": "To use their pool of resources efficiently, distributed stream-processing systems push query operators to nodes within the network. Currently, these operators, ranging from simple filters to custom business logic, are placed manually at intermediate nodes along the transmission path to meet application-specific performance goals. Determining placement locations is challenging because network and node conditions change over time and because streams may interact with each other, opening venues for reuse and repositioning of operators. This paper describes a stream-based overlay network (SBON), a layer between a stream-processing system and the physical network that manages operator placement for stream-processing systems. Our design is based on a cost space, an abstract representation of the network and on-going streams, which permits decentralized, large-scale multi-query optimization decisions. We present an evaluation of the SBON approach through simulation, experiments on PlanetLab, and an integration with Borealis, an existing stream-processing engine. Our results show that an SBON consistently improves network utilization, provides low stream latency, and enables dynamic optimization at low engineering cost.",
"title": ""
},
{
"docid": "34d2c2349291bed154ef29f2f5472cb5",
"text": "We present a novel algorithm for automatically co-segmenting a set of shapes from a common family into consistent parts. Starting from over-segmentations of shapes, our approach generates the segmentations by grouping the primitive patches of the shapes directly and obtains their correspondences simultaneously. The core of the algorithm is to compute an affinity matrix where each entry encodes the similarity between two patches, which is measured based on the geometric features of patches. Instead of concatenating the different features into one feature descriptor, we formulate co-segmentation into a subspace clustering problem in multiple feature spaces. Specifically, to fuse multiple features, we propose a new formulation of optimization with a consistent penalty, which facilitates both the identification of most similar patches and selection of master features for two similar patches. Therefore the affinity matrices for various features are sparsity-consistent and the similarity between a pair of patches may be determined by part of (instead of all) features. Experimental results have shown how our algorithm jointly extracts consistent parts across the collection in a good manner.",
"title": ""
},
{
"docid": "e13e0a64d9c9ede58590d1cc113fbada",
"text": "Background The blood-brain barrier (BBB) has been hypothesized to play a role in migraine since the late 1970s. Despite this, limited investigation of the BBB in migraine has been conducted. We used the inflammatory soup rat model of trigeminal allodynia, which closely mimics chronic migraine, to determine the impact of repeated dural inflammatory stimulation on BBB permeability. Methods The sodium fluorescein BBB permeability assay was used in multiple brain regions (trigeminal nucleus caudalis (TNC), periaqueductal grey, frontal cortex, sub-cortex, and cortex directly below the area of dural activation) during the episodic and chronic stages of repeated inflammatory dural stimulation. Glial activation was assessed in the TNC via GFAP and OX42 immunoreactivity. Minocycline was tested for its ability to prevent BBB disruption and trigeminal sensitivity. Results No astrocyte or microglial activation was found during the episodic stage, but BBB permeability and trigeminal sensitivity were increased. Astrocyte and microglial activation, BBB permeability, and trigeminal sensitivity were increased during the chronic stage. These changes were only found in the TNC. Minocycline treatment prevented BBB permeability modulation and trigeminal sensitivity during the episodic and chronic stages. Discussion Modulation of BBB permeability occurs centrally within the TNC following repeated dural inflammatory stimulation and may play a role in migraine.",
"title": ""
},
{
"docid": "89dd97465c8373bb9dabf3cbb26a4448",
"text": "Unidirectional connections from the cortex to the matrix of the corpus striatum initiate the cortico-basal ganglia (BG)-thalamocortical loop, thought to be important in momentary action selection and in longer-term fine tuning of behavioural repertoire; a discrete set of striatal compartments, striosomes, has the complementary role of registering or anticipating reward that shapes corticostriatal plasticity. Re-entrant signals traversing the cortico-BG loop impact predominantly frontal cortices, conveyed through topographically ordered output channels; by contrast, striatal input signals originate from a far broader span of cortex, and are far more divergent in their termination. The term ‘disclosed loop’ is introduced to describe this organisation: a closed circuit that is open to outside influence at the initial stage of cortical input. The closed circuit component of corticostriatal afferents is newly dubbed ‘operative’, as it is proposed to establish the bid for action selection on the part of an incipient cortical action plan; the broader set of converging corticostriatal afferents is described as contextual. A corollary of this proposal is that every unit of the striatal volume, including the long, C-shaped tail of the caudate nucleus, should receive a mandatory component of operative input, and hence include at least one area of BG-recipient cortex amongst the sources of its corticostriatal afferents. Individual operative afferents contact twin classes of GABAergic striatal projection neuron (SPN), distinguished by their neurochemical character, and onward circuitry. This is the basis of the classic direct and indirect pathway model of the cortico-BG loop. Each pathway utilises a serial chain of inhibition, with two such links, or three, providing positive and negative feedback, respectively. Operative co-activation of direct and indirect SPNs is, therefore, pictured to simultaneously promote action, and to restrain it. The balance of this rival activity is determined by the contextual inputs, which summarise the external and internal sensory environment, and the state of ongoing behavioural priorities. Notably, the distributed sources of contextual convergence upon a striatal locus mirror the transcortical network harnessed by the origin of the operative input to that locus, thereby capturing a similar set of contingencies relevant to determining action. The disclosed loop formulation of corticostriatal and subsequent BG loop circuitry, as advanced here, refines the operating rationale of the classic model and allows the integration of more recent anatomical and physiological data, some of which can appear at variance with the classic model. Equally, it provides a lucid functional context for continuing cellular studies of SPN biophysics and mechanisms of synaptic plasticity.",
"title": ""
},
{
"docid": "37d353f5b8f0034209f75a3848580642",
"text": "(NR) is the first interactive data repository with a web-based platform for visual interactive analytics. Unlike other data repositories (e.g., UCI ML Data Repository, and SNAP), the network data repository (networkrepository.com) allows users to not only download, but to interactively analyze and visualize such data using our web-based interactive graph analytics platform. Users can in real-time analyze, visualize, compare, and explore data along many different dimensions. The aim of NR is to make it easy to discover key insights into the data extremely fast with little effort while also providing a medium for users to share data, visualizations, and insights. Other key factors that differentiate NR from the current data repositories is the number of graph datasets, their size, and variety. While other data repositories are static, they also lack a means for users to collaboratively discuss a particular dataset, corrections, or challenges with using the data for certain applications. In contrast, NR incorporates many social and collaborative aspects that facilitate scientific research, e.g., users can discuss each graph, post observations, and visualizations.",
"title": ""
},
{
"docid": "08084de7a702b87bd8ffc1d36dbf67ea",
"text": "In recent years, the mobile data traffic is increasing and many more frequency bands have been employed in cellular handsets. A simple π type tunable band elimination filter (BEF) with switching function has been developed using a wideband tunable surface acoustic wave (SAW) resonator circuit. The frequency of BEF is tuned approximately 31% by variable capacitors without spurious. In LTE low band, the arrangement of TX and RX frequencies is to be reversed in Band 13, 14 and 20 compared with the other bands. The steep edge slopes of the developed filter can be exchanged according to the resonance condition and switching. With combining the TX and RX tunable BEFs and the small sized broadband circulator, a new tunable duplexer has been fabricated, and its TX-RX isolation is proved to be more than 50dB in LTE low band operations.",
"title": ""
},
{
"docid": "d13145bc68472ed9a06bafd86357c5dd",
"text": "Modeling cloth with fiber-level geometry can produce highly realistic details. However, rendering fiber-level cloth models not only has a high memory cost but it also has a high computation cost even for offline rendering applications. In this paper we present a real-time fiber-level cloth rendering method for current GPUs. Our method procedurally generates fiber-level geometric details on-the-fly using yarn-level control points for minimizing the data transfer to the GPU. We also reduce the rasterization operations by collectively representing the fibers near the center of each ply that form the yarn structure. Moreover, we employ a level-of-detail strategy to minimize or completely eliminate the generation of fiber-level geometry that would have little or no impact on the final rendered image. Furthermore, we introduce a simple yarn-level ambient occlusion approximation and self-shadow computation method that allows lighting with self-shadows using relatively low-resolution shadow maps. We demonstrate the effectiveness of our approach by comparing our simplified fiber geometry to procedurally generated references and display knitwear containing more than a hundred million individual fiber curves at real-time frame rates with shadows and ambient occlusion.",
"title": ""
},
{
"docid": "a4fb1919a1bf92608a55bc3feedf897d",
"text": "We develop an algebraic framework, Logic Programming Doctrines, for the syntax, proof theory, operational semantics and model theory of Horn Clause logic programming based on indexed premonoidal categories. Our aim is to provide a uniform framework for logic programming and its extensions capable of incorporating constraints, abstract data types, features imported from other programming language paradigms and a mathematical description of the state space in a declarative manner. We define a new way to embed information about data into logic programming derivations by building a sketch-like description of data structures directly into an indexed category of proofs. We give an algebraic axiomatization of bottom-up semantics in this general setting, describing categorical models as fixed points of a continuous operator. © 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b14b36728c1775a8469bce1c42ce8783",
"text": "Inorganic scintillators are commonly used as sensors for ionizing radiation detectors in a variety of applications, ranging from particle and nuclear physics detectors, medical imaging, nuclear installations radiation control, homeland security, well oil logging and a number of industrial non-destructive investigations. For all these applications, the scintillation light produced by the energy deposited in the scintillator allows the determination of the position, the energy and the time of the event. However, the performance of these detectors is often limited by the amount of light collected on the photodetector. A major limitation comes from the fact that inorganic scintillators are generally characterized by a high refractive index, as a consequence of the required high density to provide the necessary stopping power for ionizing radiation. The index mismatch between the crystal and the surrounding medium (air or optical grease) strongly limits the light extraction efficiency because of total internal reflection (TIR), increasing the travel path and the absorption probability through multiple bouncings of the photons in the crystal. Photonic crystals can overcome this problem and produce a controllable index matching between the crystal and the output medium through an interface made of a thin nano-structured layer of optically-transparent high index material. This review presents a summary of the works aiming at improving the light collection efficiency of scintillators using photonic crystals since this idea was introduced 10 years ago.",
"title": ""
},
{
"docid": "3b78988b74c2e42827c9e75e37d2223e",
"text": "This paper addresses how to construct a RBAC-compatible attribute-based encryption (ABE) for secure cloud storage, which provides a user-friendly and easy-to-manage security mechanism without user intervention. Similar to role hierarchy in RBAC, attribute lattice introduced into ABE is used to define a seniority relation among all values of an attribute, whereby a user holding the senior attribute values acquires permissions of their juniors. Based on these notations, we present a new ABE scheme called Attribute-Based Encryption with Attribute Lattice (ABE-AL) that provides an efficient approach to implement comparison operations between attribute values on a poset derived from attribute lattice. By using bilinear groups of composite order, we propose a practical construction of ABE-AL based on forward and backward derivation functions. Compared with prior solutions, our scheme offers a compact policy representation solution, which can significantly reduce the size of privatekeys and ciphertexts. Furthermore, our solution provides a richer expressive power of access policies to facilitate flexible access control for ABE scheme.",
"title": ""
},
{
"docid": "a6fec60aeb6e5824ed07eaa3257969aa",
"text": "What aspects of information assurance can be identified in Business-to-Consumer (B-toC) online transactions? The purpose of this research is to build a theoretical framework for studying information assurance based on a detailed analysis of academic literature for online exchanges in B-to-C electronic commerce. Further, a semantic network content analysis is conducted to analyze the representations of information assurance in B-to-C electronic commerce in the real online market place (transaction Web sites of selected Fortune 500 firms). The results show that the transaction websites focus on some perspectives and not on others. For example, we see an emphasis on the importance of technological and consumer behavioral elements of information assurance such as issues of online security and privacy. Further corporate practitioners place most emphasis on transaction-related information assurance issues. Interestingly, the product and institutional dimension of information assurance in online transaction websites are only",
"title": ""
},
{
"docid": "581e3373ecfbc6c012df7c166636cc50",
"text": "The deep convolutional neural network(CNN) has significantly raised the performance of image classification and face recognition. Softmax is usually used as supervision, but it only penalizes the classification loss. In this paper, we propose a novel auxiliary supervision signal called contrastive-center loss, which can further enhance the discriminative power of the features, for it learns a class center for each class. The proposed contrastive-center loss simultaneously considers intra-class compactness and inter-class separability, by penalizing the contrastive values between: (1)the distances of training samples to their corresponding class centers, and (2)the sum of the distances of training samples to their non-corresponding class centers. Experiments on different datasets demonstrate the effectiveness of contrastive-center loss.",
"title": ""
}
] |
scidocsrr
|
a352dd701300a73364dde5029a62df2a
|
ReVision: automated classification, analysis and redesign of chart images
|
[
{
"docid": "98b30c5056d33f4f92bedc4f2e2698ce",
"text": "We present an approach for classifying images of charts based on the shape and spatial relationships of their primitives. Five categories are considered: bar-charts, curve-plots, pie-charts, scatter-plots and surface-plots. We introduce two novel features to represent the structural information based on (a) region segmentation and (b) curve saliency. The local shape is characterized using the Histograms of Oriented Gradients (HOG) and the Scale Invariant Feature Transform (SIFT) descriptors. Each image is represented by sets of feature vectors of each modality. The similarity between two images is measured by the overlap in the distribution of the features -measured using the Pyramid Match algorithm. A test image is classified based on its similarity with training images from the categories. The approach is tested with a database of images collected from the Internet.",
"title": ""
}
] |
[
{
"docid": "1cb2d77cbe4c164e0a9a9481cd268d01",
"text": "Visual analytics (VA) system development started in academic research institutions where novel visualization techniques and open source toolkits were developed. Simultaneously, small software companies, sometimes spin-offs from academic research institutions, built solutions for specific application domains. In recent years we observed the following trend: some small VA companies grew exponentially; at the same time some big software vendors such as IBM and SAP started to acquire successful VA companies and integrated the acquired VA components into their existing frameworks. Generally the application domains of VA systems have broadened substantially. This phenomenon is driven by the generation of more and more data of high volume and complexity, which leads to an increasing demand for VA solutions from many application domains. In this paper we survey a selection of state-of-the-art commercial VA frameworks, complementary to an existing survey on open source VA tools. From the survey results we identify several improvement opportunities as future research directions.",
"title": ""
},
{
"docid": "6033682cf01008f027877e3fda4511f8",
"text": "The HER-2/neu oncogene is a member of the erbB-like oncogene family, and is related to, but distinct from, the epidermal growth factor receptor. This gene has been shown to be amplified in human breast cancer cell lines. In the current study, alterations of the gene in 189 primary human breast cancers were investigated. HER-2/neu was found to be amplified from 2- to greater than 20-fold in 30% of the tumors. Correlation of gene amplification with several disease parameters was evaluated. Amplification of the HER-2/neu gene was a significant predictor of both overall survival and time to relapse in patients with breast cancer. It retained its significance even when adjustments were made for other known prognostic factors. Moreover, HER-2/neu amplification had greater prognostic value than most currently used prognostic factors, including hormonal-receptor status, in lymph node-positive disease. These data indicate that this gene may play a role in the biologic behavior and/or pathogenesis of human breast cancer.",
"title": ""
},
{
"docid": "13eaa316c8e41a9cc3807d60ba72db66",
"text": "This is a short paper introducing pitfalls when implementing averaged scores. Although, it is common to compute averaged scores, it is good to specify in detail how the scores are computed.",
"title": ""
},
{
"docid": "16d7767e9f2216ce0789b8a92d8d65e4",
"text": "In the rst genetic programming (GP) book John Koza noticed that tness histograms give a highly informative global view of the evolutionary process (Koza, 1992). The idea is further developed in this paper by discussing GP evolution in analogy to a physical system. I focus on three interrelated major goals: (1) Study the the problem of search eeort allocation in GP; (2) Develop methods in the GA/GP framework that allow adap-tive control of diversity; (3) Study ways of adaptation for faster convergence to optimal solution. An entropy measure based on phenotype classes is introduced which abstracts tness histograms. In this context, entropy represents a measure of population diversity. An analysis of entropy plots and their correlation with other statistics from the population enables an intelligent adaptation of search control.",
"title": ""
},
{
"docid": "2794ea63eb1a24ebd1cea052345569eb",
"text": "Ethernet is considered as a future communication standard for distributed embedded systems in the automotive and industrial domains. A key challenge is the deterministic low-latency transport of Ethernet frames, as many safety-critical real-time applications in these domains have tight timing requirements. Time-sensitive networking (TSN) is an upcoming set of Ethernet standards, which (among other things) address these requirements by specifying new quality of service mechanisms in the form of different traffic shapers. In this paper, we consider TSN's time-aware and peristaltic shapers and evaluate whether these shapers are able to fulfill these strict timing requirements. We present a formal timing analysis, which is a key requirement for the adoption of Ethernet in safety-critical real-time systems, to derive worst-case latency bounds for each shaper. We use a realistic automotive Ethernet setup to compare these shapers to each other and against Ethernet following IEEE 802.1Q.",
"title": ""
},
{
"docid": "c1dbf418f72ad572b3b745a94fe8fbf7",
"text": "In this work we show how to integrate prior statistical knowledge, obtained through principal components analysis (PCA), into a convolutional neural network in order to obtain robust predictions even when dealing with corrupted or noisy data. Our network architecture is trained end-to-end and includes a specifically designed layer which incorporates the dataset modes of variation discovered via PCA and produces predictions by linearly combining them. We also propose a mechanism to focus the attention of the CNN on specific regions of interest of the image in order to obtain refined predictions. We show that our method is effective in challenging segmentation and landmark localization tasks.",
"title": ""
},
{
"docid": "d46916f82e8f6ac8f4f3cb3df1c6875f",
"text": "Mobile devices are becoming the prevalent computing platform for most people. TouchDevelop is a new mobile development environment that enables anyone with a Windows Phone to create new apps directly on the smartphone, without a PC or a traditional keyboard. At the core is a new mobile programming language and editor that was designed with the touchscreen as the only input device in mind. Programs written in TouchDevelop can leverage all phone sensors such as GPS, cameras, accelerometer, gyroscope, and stored personal data such as contacts, songs, pictures. Thousands of programs have already been written and published with TouchDevelop.",
"title": ""
},
{
"docid": "ca17638b251d20cca2973a3f551b822f",
"text": "The first edition of Artificial Intelligence: A Modern Approach has become a classic in the AI literature. It has been adopted by over 600 universities in 60 countries, and has been praised as the definitive synthesis of the field. In the second edition, every chapter has been extensively rewritten. Significant new material has been introduced to cover areas such as constraint satisfaction, fast propositional inference, planning graphs, internet agents, exact probabilistic inference, Markov Chain Monte Carlo techniques, Kalman filters, ensemble learning methods, statistical learning, probabilistic natural language models, probabilistic robotics, and ethical aspects of AI. The book is supported by a suite of online resources including source code, figures, lecture slides, a directory of over 800 links to \"AI on the Web,\" and an online discussion group. All of this is available at: aima.cs.berkeley.edu.",
"title": ""
},
{
"docid": "12840153a7f2be146a482ed78e7822a6",
"text": "We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost. ! 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c5380f25f7b3005e8cbfceba9bb4bfa0",
"text": "We propose an event-driven model for headline generation. Given an input document, the system identifies a key event chain by extracting a set of structural events that describe them. Then a novel multi-sentence compression algorithm is used to fuse the extracted events, generating a headline for the document. Our model can be viewed as a novel combination of extractive and abstractive headline generation, combining the advantages of both methods using event structures. Standard evaluation shows that our model achieves the best performance compared with previous state-of-the-art systems.",
"title": ""
},
{
"docid": "20af5209de71897158820f935018d877",
"text": "This paper presents a new bag-of-entities representation for document ranking, with the help of modern knowledge bases and automatic entity linking. Our system represents query and documents by bag-of-entities vectors constructed from their entity annotations, and ranks documents by their matches with the query in the entity space. Our experiments with Freebase on TREC Web Track datasets demonstrate that current entity linking systems can provide sufficient coverage of the general domain search task, and that bag-of-entities representations outperform bag-of-words by as much as 18% in standard document ranking tasks.",
"title": ""
},
{
"docid": "988ce34190564babadb1e3b30a0d927c",
"text": "The kinetics of saccharose fermentation by Kombucha is not yet well defined due to lack of knowledge of reaction mechanisms taking place during this process. In this study, the kinetics of saccharose fermentation by Kombucha was analysed using the suggested empirical model. The data were obtained on 1.5 g L of black tea, with 66.47 g L of saccharose and using 10 or 15% (V/V) of Kombucha. The total number of viable cells was as follows: approximately 5×10 of yeast cells per mL of the inoculum and approximately 2x10 of bacteria cells per mL of the inoculum. The samples were analysed after 0, 3, 4, 5, 6, 7 and 10 days. Their pH values and contents of saccharose, glucose, fructose, total acids and ethanol were determined. A saccharose concentration model was defined as a sigmoidal function at 22 and 30 °C, and with 10 and 15% (V/V) of inoculum quantity. The determination coefficients of the functions were very high (R > 0.99). Reaction rates were calculated as first derivatives of Boltzmann’s functions. No simple correlation between the rate of reaction and independent variables (temperature and inoculum concentration) was found. Analysis of the empirical model indicated that saccharose fermentation by Kombucha occurred according to very complex kinetics.",
"title": ""
},
{
"docid": "ea64ba0b1c3d4ed506fb3605893fef92",
"text": "We explore frame-level audio feature learning for chord recognition using artificial neural networks. We present the argument that chroma vectors potentially hold enough information to model harmonic content of audio for chord recognition, but that standard chroma extractors compute too noisy features. This leads us to propose a learned chroma feature extractor based on artificial neural networks. It is trained to compute chroma features that encode harmonic information important for chord recognition, while being robust to irrelevant interferences. We achieve this by feeding the network an audio spectrum with context instead of a single frame as input. This way, the network can learn to selectively compensate noise and resolve harmonic ambiguities. We compare the resulting features to hand-crafted ones by using a simple linear frame-wise classifier for chord recognition on various data sets. The results show that the learned feature extractor produces superior chroma vectors for chord recognition.",
"title": ""
},
{
"docid": "58e0b66d55ca7f5571f4f55d8fcf822c",
"text": "Events of various kinds are mentioned and discussed in text documents, whether they are books, news articles, blogs or microblog feeds. The paper starts by giving an overview of how events are treated in linguistics and philosophy. We follow this discussion by surveying how events and associated information are handled in computationally. In particular, we look at how textual documents can be mined to extract events and ancillary information. These days, it is mostly through the application of various machine learning techniques. We also discuss applications of event detection and extraction systems, particularly in summarization, in the medical domain and in the context of Twitter posts. We end the paper with a discussion of challenges and future directions.",
"title": ""
},
{
"docid": "c6de5f33ca775fb42db4667b0dcc74bf",
"text": "Robotic-assisted laparoscopic prostatectomy is a surgical procedure performed to eradicate prostate cancer. Use of robotic assistance technology allows smaller incisions than the traditional laparoscopic approach and results in better patient outcomes, such as less blood loss, less pain, shorter hospital stays, and better postoperative potency and continence rates. This surgical approach creates unique challenges in patient positioning for the perioperative team because the patient is placed in the lithotomy with steep Trendelenburg position. Incorrect positioning can lead to nerve damage, pressure ulcers, and other complications. Using a special beanbag positioning device made specifically for use with this severe position helps prevent these complications.",
"title": ""
},
{
"docid": "81086098b7516e9f03559aa8b99df90e",
"text": "Abstractive text summarization aims to shorten long text documents into a human readable form that contains the most important facts from the original document. However, the level of actual abstraction as measured by novel phrases that do not appear in the source document remains low in existing approaches. We propose two techniques to improve the level of abstraction of generated summaries. First, we decompose the decoder into a contextual network that retrieves relevant parts of the source document, and a pretrained language model that incorporates prior knowledge about language generation. Second, we propose a novelty metric that is optimized directly through policy learning to encourage the generation of novel phrases. Our model achieves results comparable to state-of-the-art models, as determined by ROUGE scores and human evaluations, while achieving a significantly higher level of abstraction as measured by n-gram overlap with the source document.ive text summarization aims to shorten long text documents into a human readable form that contains the most important facts from the original document. However, the level of actual abstraction as measured by novel phrases that do not appear in the source document remains low in existing approaches. We propose two techniques to improve the level of abstraction of generated summaries. First, we decompose the decoder into a contextual network that retrieves relevant parts of the source document, and a pretrained language model that incorporates prior knowledge about language generation. Second, we propose a novelty metric that is optimized directly through policy learning to encourage the generation of novel phrases. Our model achieves results comparable to state-of-the-art models, as determined by ROUGE scores and human evaluations, while achieving a significantly higher level of abstraction as measured by n-gram overlap with the source document.",
"title": ""
},
{
"docid": "befbfb5b083cddb7fb43ebaa8df244c1",
"text": "The aim of this study was to adapt and validate the Spanish version of the Sport Motivation Scale-II (S-SMS-II) in adolescent athletes. The sample included 766 Spanish adolescents (263 females and 503 males; average age = 13.71 ± 1.30 years old). The methodological steps established by the International Test Commission were followed. Four measurement models were compared employing the maximum likelihood estimation (with six, five, three, and two factors). Then, factorial invariance analyses were conducted and the effect sizes were calculated. Finally, the reliability was calculated using Cronbach's alpha, omega, and average variance extracted coefficients. The five-factor S-SMS-II showed the best indices of fit (Cronbach's alpha .64 to .74; goodness of fit index .971, root mean square error of approximation .044, comparative fit index .966). Factorial invariance was also verified across gender and between sport-federated athletes and non-federated athletes. The proposed S-SMS-II is discussed according to previous validated versions (English, Portuguese, and Chinese).",
"title": ""
},
{
"docid": "021789cea259697f236986028218e3f6",
"text": "In the IT world of corporate networking, how businesses store and compute data is starting to shift from in-house servers to the cloud. However, some enterprises are still hesitant to make this leap to the cloud because of their information security and data privacy concerns. Enterprises that want to invest into this service need to feel confident that the information stored on the cloud is secure. Due to this need for confidence, trust is one of the major qualities that cloud service providers (CSPs) must build for cloud service users (CSUs). To do this, a model that all CSPs can follow must exist to establish a trust standard in the industry. If no concrete model exists, the future of cloud computing will be stagnant. This paper presents a new trust model that involves all the cloud stakeholders such as CSU, CSP, and third-party auditors. Our proposed trust model is objective since it involves third-party auditors to develop unbiased trust between the CSUs and the CSPs. Furthermore, to support the implementation of the proposed trust model, we rank CSPs according to the trust-values obtained from the trust model. The final score for each participating CSP will be determined based on the third-party assessment and the feedback received from the CSUs.",
"title": ""
},
{
"docid": "67b41c7c37f0e497d2019399c0a87af9",
"text": "RAYNAUD’S Disease is a vasospastic disorder affecting primarily the distal resistance vessels. The disease is typically characterized by the abrupt onset of digital pallor or cyanosis in response to cold exposure or stress. Raynaud’s Disease may occur independently or be associated with other conditions (systemic lupus erythematosus and scleroderma) and connective tissue diseases. Initial symptoms may include a burning sensation in the affected area accompanied by allodynia and painful paresthesias with vasomotor (cold, cyanotic) changes. Ultimately, as the ischemia becomes more chronic, this condition may progress to amputation of the affected digits. The most common indication for spinal cord stimulation in the United States is for chronic painful neuropathies. However, in Europe, spinal cord stimulation is frequently used to treat ischemic conditions, such as peripheral vascular disease and coronary occlusive disease. Although technically an off-label indication in the United States, this practice is supported by many published studies. There have also been case reports of its use in other diseases resulting in arterial insufficiency to the extremities, such as thromboangiitis obliterans (Buerger’s Disease), but its use in Raynaud’s Disease is relatively underreported. This case describes the use of cervical spinal cord stimulation to treat refractory digital ischemia in a patient with advanced Raynaud’s Disease.",
"title": ""
},
{
"docid": "6c97853046dd2673d9c83990119ef43c",
"text": "Atomic actions (or transactions) are useful for coping with concurrency and failures. One way of ensuring atomicity of actions is to implement applications in terms of atomic data types: abstract data types whose objects ensure serializability and recoverability of actions using them. Many atomic types can be implemented to provide high levels of concurrency by taking advantage of algebraic properties of the type's operations, for example, that certain operations commute. In this paper we analyze the level of concurrency permitted by an atomic type. We introduce several local constraints on individual objects that suffice to ensure global atomicity of actions; we call these constraints local atomicity properties. We present three local atomicity properties, each of which is optimal: no strictly weaker local constraint on objects suffices to ensure global atomicity for actions. Thus, the local atomicity properties define precise limits on the amount of concurrency that can be permitted by an atomic type.",
"title": ""
}
] |
scidocsrr
|
912991cba9804e1d19cdac74ab16bdd1
|
Sliding-mode controller for four-wheel-steering vehicle: Trajectory-tracking problem
|
[
{
"docid": "6fdee3d247a36bc7d298a7512a11118a",
"text": "Fully automatic driving is emerging as the approach to dramatically improve efficiency (throughput per unit of space) while at the same time leading to the goal of zero accidents. This approach, based on fully automated vehicles, might improve the efficiency of road travel in terms of space and energy used, and in terms of service provided as well. For such automated operation, trajectory planning methods that produce smooth trajectories, with low level associated accelerations and jerk for providing human comfort, are required. This paper addresses this problem proposing a new approach that consists of introducing a velocity planning stage in the trajectory planner. Moreover, this paper presents the design and simulation evaluation of trajectory-tracking and path-following controllers for autonomous vehicles based on sliding mode control. A new design of sliding surface is proposed, such that lateral and angular errors are internally coupled with each other (in cartesian space) in a sliding surface leading to convergence of both variables.",
"title": ""
}
] |
[
{
"docid": "2c91e6ca6cf72279ad084c4a51b27b1c",
"text": "Knowing where the host lane lies is paramount to the effectiveness of many advanced driver assistance systems (ADAS), such as lane keep assist (LKA) and adaptive cruise control (ACC). This paper presents an approach for improving lane detection based on the past trajectories of vehicles. Instead of expensive high-precision map, we use the vehicle trajectory information to provide additional lane-level spatial support of the traffic scene, and combine it with the visual evidence to improve each step of the lane detection procedure, thereby overcoming typical challenges of normal urban streets. Such an approach could serve as an Add-On to enhance the performance of existing lane detection systems in terms of both accuracy and robustness. Experimental results in various typical but challenging scenarios show the effectiveness of the proposed system.",
"title": ""
},
{
"docid": "46360fec3d7fa0adbe08bb4b5bb05847",
"text": "Previous approaches to action recognition with deep features tend to process video frames only within a small temporal region, and do not model long-range dynamic information explicitly. However, such information is important for the accurate recognition of actions, especially for the discrimination of complex activities that share sub-actions, and when dealing with untrimmed videos. Here, we propose a representation, VLAD for Deep Dynamics (VLAD3), that accounts for different levels of video dynamics. It captures short-term dynamics with deep convolutional neural network features, relying on linear dynamic systems (LDS) to model medium-range dynamics. To account for long-range inhomogeneous dynamics, a VLAD descriptor is derived for the LDS and pooled over the whole video, to arrive at the final VLAD3 representation. An extensive evaluation was performed on Olympic Sports, UCF101 and THUMOS15, where the use of the VLAD3 representation leads to state-of-the-art results.",
"title": ""
},
{
"docid": "363c1ecd086043311f16b53b20778d51",
"text": "One recent development of cultural globalization emerges in the convergence of taste in media consumption within geo-cultural regions, such as Latin American telenovelas, South Asian Bollywood films and East Asian trendy dramas. Originating in Japan, the so-called trendy dramas (or idol dramas) have created a craze for Japanese commodities in its neighboring countries (Ko, 2004). Following this Japanese model, Korea has also developed as a stronghold of regional exports, ranging from TV programs, movies and pop music to food, fashion and tourism. The fondness for all things Japanese and Korean in East Asia has been vividly captured by such buzz phrases as Japan-mania (hari in Chinese) and the Korean wave (hallyu in Korean and hanliu in Chinese). These two phenomena underscore how popular culture helps polish the image of a nation and thus strengthens its economic competitiveness in the global market. Consequently, nationbranding has become incorporated into the project of nation-building in light of globalization. However, Japan’s cultural spread and Korea’s cultural expansion in East Asia are often analysed from angles that are polar opposites. Scholars suggest that Japan-mania is initiated by the ardent consumers of receiving countries (Nakano, 2002), while the Korea wave is facilitated by the Korean state in order to boost its culture industry (Ryoo, 2008). Such claims are legitimate but neglect the analogues of these two phenomena. This article examines the parallel paths through which Japan-mania and the Korean wave penetrate into people’s everyday practices in Taiwan – arguably one of the first countries to be swept by these two trends. My aim is to illuminate the processes in which nation-branding is not only promoted by a nation as an international marketing strategy, but also appropriated by a receiving country as a pattern of consumption. Three seemingly contradictory arguments explain why cultural products ‘sell’ across national borders: cultural transparency, cultural difference and hybridization. First, cultural exports targeting the global market are rarely culturally specific so that they allow worldwide audiences to ‘project [into them] indigenous values, beliefs, rites, and rituals’ Media, Culture & Society 33(1) 3 –18 © The Author(s) 2011 Reprints and permission: sagepub.co.uk/journalsPermissions.nav DOI: 10.1177/0163443710379670 mcs.sagepub.com",
"title": ""
},
{
"docid": "72a1798a864b4514d954e1e9b6089ad8",
"text": "Clustering image pixels is an important image segmentation technique. While a large amount of clustering algorithms have been published and some of them generate impressive clustering results, their performance often depends heavily on user-specified parameters. This may be a problem in the practical tasks of data clustering and image segmentation. In order to remove the dependence of clustering results on user-specified parameters, we investigate the characteristics of existing clustering algorithms and present a parameter-free algorithm based on the DSets (dominant sets) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithms. First, we apply histogram equalization to the pairwise similarity matrix of input data and make DSets clustering results independent of user-specified parameters. Then, we extend the clusters from DSets with DBSCAN, where the input parameters are determined based on the clusters from DSets automatically. By merging the merits of DSets and DBSCAN, our algorithm is able to generate the clusters of arbitrary shapes without any parameter input. In both the data clustering and image segmentation experiments, our parameter-free algorithm performs better than or comparably with other algorithms with careful parameter tuning.",
"title": ""
},
{
"docid": "01edfc6eb157dc8cf2642f58cf3aba25",
"text": "Understanding developmental processes, especially in non-model crop plants, is extremely important in order to unravel unique mechanisms regulating development. Chickpea (C. arietinum L.) seeds are especially valued for their high carbohydrate and protein content. Therefore, in order to elucidate the mechanisms underlying seed development in chickpea, deep sequencing of transcriptomes from four developmental stages was undertaken. In this study, next generation sequencing platform was utilized to sequence the transcriptome of four distinct stages of seed development in chickpea. About 1.3 million reads were generated which were assembled into 51,099 unigenes by merging the de novo and reference assemblies. Functional annotation of the unigenes was carried out using the Uniprot, COG and KEGG databases. RPKM based digital expression analysis revealed specific gene activities at different stages of development which was validated using Real time PCR analysis. More than 90% of the unigenes were found to be expressed in at least one of the four seed tissues. DEGseq was used to determine differentially expressing genes which revealed that only 6.75% of the unigenes were differentially expressed at various stages. Homology based comparison revealed 17.5% of the unigenes to be putatively seed specific. Transcription factors were predicted based on HMM profiles built using TF sequences from five legume plants and analyzed for their differential expression during progression of seed development. Expression analysis of genes involved in biosynthesis of important secondary metabolites suggested that chickpea seeds can serve as a good source of antioxidants. Since transcriptomes are a valuable source of molecular markers like simple sequence repeats (SSRs), about 12,000 SSRs were mined in chickpea seed transcriptome and few of them were validated. In conclusion, this study will serve as a valuable resource for improved chickpea breeding.",
"title": ""
},
{
"docid": "b4978b2fbefc79fba6e69ad8fd55ebf9",
"text": "This paper proposes an approach based on Least Squares Suppo rt Vect r Machines (LS-SVMs) for solving second order parti al differential equations (PDEs) with variable coe fficients. Contrary to most existing techniques, the proposed m thod provides a closed form approximate solution. The optimal representat ion of the solution is obtained in the primal-dual setting. T he model is built by incorporating the initial /boundary conditions as constraints of an optimization prob lem. The developed method is well suited for problems involving singular, variable and const a t coefficients as well as problems with irregular geometrical domai ns. Numerical results for linear and nonlinear PDEs demonstrat e he efficiency of the proposed method over existing methods.",
"title": ""
},
{
"docid": "9c349ef0f3a48eaeaf678b8730d4b82c",
"text": "This paper discusses the effectiveness of the EEG signal for human identification using four or less of channels of two different types of EEG recordings. Studies have shown that the EEG signal has biometric potential because signal varies from person to person and impossible to replicate and steal. Data were collected from 10 male subjects while resting with eyes open and eyes closed in 5 separate sessions conducted over a course of two weeks. Features were extracted using the wavelet packet decomposition and analyzed to obtain the feature vectors. Subsequently, the neural networks algorithm was used to classify the feature vectors. Results show that, whether or not the subjects’ eyes were open are insignificant for a 4– channel biometrics system with a classification rate of 81%. However, for a 2–channel system, the P4 channel should not be included if data is acquired with the subjects’ eyes open. It was observed that for 2– channel system using only the C3 and C4 channels, a classification rate of 71% was achieved. Keywords—Biometric, EEG, Wavelet Packet Decomposition, Neural Networks",
"title": ""
},
{
"docid": "de2ed315762d3f0ac34fe0b77567b3a2",
"text": "A study in vitro of specimens of human aortic and common carotid arteries was carried out to determine the feasibility of direct measurement (i.e., not from residual lumen) of arterial wall thickness with B mode real-time imaging. Measurements in vivo by the same technique were also obtained from common carotid arteries of 10 young normal male subjects. Aortic samples were classified as class A (relatively normal) or class B (with one or more atherosclerotic plaques). In all class A and 85% of class B arterial samples a characteristic B mode image composed of two parallel echogenic lines separated by a hypoechoic space was found. The distance between the two lines (B mode image of intimal + medial thickness) was measured and correlated with the thickness of different combinations of tunicae evaluated by gross and microscopic examination. On the basis of these findings and the results of dissection experiments on the intima and adventitia we concluded that results of B mode imaging of intimal + medial thickness did not differ significantly from the intimal + medial thickness measured on pathologic examination. With respect to the accuracy of measurements obtained by B mode imaging as compared with pathologic findings, we found an error of less than 20% for measurements in 77% of normal and pathologic aortic walls. In addition, no significant difference was found between B mode-determined intimal + medial thickness in the common carotid arteries evaluated in vitro and that determined by this method in vivo in young subjects, indicating that B mode imaging represents a useful approach for the measurement of intimal + medial thickness of human arteries in vivo.",
"title": ""
},
{
"docid": "67dedca1dbdf5845b32c74e17fc42eb6",
"text": "How much trust a user places in a recommender is crucial to the uptake of the recommendations. Although prior work established various factors that build and sustain user trust, their comparative impact has not been studied in depth. This paper presents the results of a crowdsourced study examining the impact of various recommendation interfaces and content selection strategies on user trust. It evaluates the subjective ranking of nine key factors of trust grouped into three dimensions and examines the differences observed with respect to users' personality traits.",
"title": ""
},
{
"docid": "fec3feb40d363535955a9ac4234c4126",
"text": "This article presents metrics from two Hewlett-Packard (HP) reuse programs that document the improved quality, increased productivity, shortened time-to-market, and enhanced economics resulting from reuse. Work products are the products or by-products of the software-development process: for example, code, design, and test plans. Reuse is the use of these work products without modification in the development of other software. Leveraged reuse is modifying existing work products to meet specific system requirements. A producer is a creator of reusable work products, and the consumer is someone who uses them to create other software. Time-to-market is the time it takes to deliver a product from the time it is conceived. Experience with reuse has been largely positive. Because work products are used multiple times, the accumulated defect fixes result in a higher quality work product. Because the work products have already been created, tested, and documented, productivity increases because consumers of reusable work products need to do less work. However, increased productivity from reuse does not necessarily shorten time-to-market. To reduce time-to-market, reuse must be used effectively on the critical path of a development project. Finally, we have found that reuse allows an organization to use personnel more effectively because it leverages expertise. However, software reuse is not free. It requires resources to create and maintain reusable work products, a reuse library, and reuse tools. To help evaluate the costs and benefits of reuse, we have developed an economic analysis method, which we have applied to multiple reuse programs at HP.<<ETX>>",
"title": ""
},
{
"docid": "bb13ad5b41abbf80f7e7c70a9098cd15",
"text": "OBJECTIVE\nThis study assessed the psychological distress in Spanish college women and analyzed it in relation to sociodemographic and academic factors.\n\n\nPARTICIPANTS AND METHODS\nThe authors selected a stratified random sampling of 1,043 college women (average age of 22.2 years). Sociodemographic and academic information were collected, and psychological distress was assessed with the Symptom Checklist-90-Revised.\n\n\nRESULTS\nThis sample of college women scored the highest on the depression dimension and the lowest on the phobic anxiety dimension. The sample scored higher than women of the general population on the dimensions of obsessive-compulsive, interpersonal sensitivity, paranoid ideation, psychoticism, and on the Global Severity Index. Scores in the sample significantly differed based on age, relationship status, financial independence, year of study, and area of study.\n\n\nCONCLUSION\nThe results indicated an elevated level of psychological distress among college women, and therefore college health services need to devote more attention to their mental health.",
"title": ""
},
{
"docid": "69d32f5e6a6612770cd50b20e5e7f802",
"text": "In this paper we present an approach for efficiently retrieving the most similar image, based on point-to-point correspondences, within a sequence that has been acquired through continuous camera movement. Our approach is entailed to the use of standardized binary feature descriptors and exploits the temporal form of the input data to dynamically adapt the search structure. While being straightforward to implement, our method exhibits very fast response times and its Precision/Recall rates compete with state of the art approaches. Our claims are supported by multiple large scale experiments on publicly available datasets.",
"title": ""
},
{
"docid": "6e00567c5c33d899af9b5a67e37711a3",
"text": "The adoption of cloud computing facilities and programming models differs vastly between different application domains. Scalable web applications, low-latency mobile backends and on-demand provisioned databases are typical cases for which cloud services on the platform or infrastructure level exist and are convincing when considering technical and economical arguments. Applications with specific processing demands, including high-performance computing, high-throughput computing and certain flavours of scientific computing, have historically required special configurations such as computeor memory-optimised virtual machine instances. With the rise of function-level compute instances through Function-as-a-Service (FaaS) models, the fitness of generic configurations needs to be re-evaluated for these applications. We analyse several demanding computing tasks with regards to how FaaS models compare against conventional monolithic algorithm execution. Beside the comparison, we contribute a refined FaaSification process for legacy software and provide a roadmap for future work. 1 Research Direction The ability to turn programmed functions or methods into ready-to-use cloud services is leading to a seemingly serverless development and deployment experience for application software engineers [1]. Without the necessity to allocate resources beforehand, prototyping new features and workflows becomes faster and more convenient to application service providers. These advantages have given boost to an industry trend consequently called Serverless Computing. The more precise, almost overlapping term in accordance with Everything-asa-Service (XaaS) cloud computing taxonomies is Function-as-a-Service (FaaS) [4]. In the FaaS layer, functions, either on the programming language level or as abstract concept around binary implementations, are executed synchronously or asynchronously through multi-protocol triggers. Function instances are provisioned on demand through coldstart or warmstart of the implementation in conjunction with an associated configuration in few milliseconds, elastically scaled as needed, and charged per invocation and per product of period of time and resource usage, leading to an almost perfect pay-as-you-go utility pricing model [11]. FaaS is gaining traction primarily in three areas. First, in Internet-of-Things applications where connected devices emit data sporadically. Second, for web applications with light-weight backend tasks. Third, as glue code between other cloud computing services. In contrast to the industrial popularity, no work is known to us which explores its potential for scientific and high-performance computing applications with more demanding execution requirements. From a cloud economics and strategy perspective, FaaS is a refinement of the platform layer (PaaS) with particular tools and interfaces. Yet from a software engineering and deployment perspective, functions are complementing other artefact types which are deployed into PaaS or underlying IaaS environments. Fig. 1 explains this positioning within the layered IaaS, PaaS and SaaS service classes, where the FaaS runtime itself is subsumed under runtime stacks. Performing experimental or computational science research with FaaS implies that the two roles shown, end user and application engineer, are adopted by a single researcher or a team of researchers, which is the setting for our research. Fig. 1. 
Positioning of FaaS in cloud application development The necessity to conduct research on FaaS for further application domains stems from the unique execution characteristics. Service instances are heuristically stateless, ephemeral, and furthermore limited in resource allotment and execution time. They are moreover isolated from each other and from the function management and control plane. In public commercial offerings, they are billed in subsecond intervals and terminated after few minutes, but as with any cloud application, private deployments are also possible. Hence, there is a trade-off between advantages and drawbacks which requires further analysis. For example, existing parallelisation frameworks cannot easily be used at runtime as function instances can only, in limited ways, invoke other functions without the ability to configure their settings. Instead, any such parallelisation needs to be performed before deployment with language-specific tools such as Pydron for Python [10] or Calvert’s compiler for Java [3]. For resourceand time-demanding applications, no special-purpose FaaS instances are offered by commercial cloud providers. This is a surprising observation given the multitude of options in other cloud compute services beyond general-purpose offerings, especially on the infrastructure level (IaaS). These include instance types optimised for data processing (with latest-generation processors and programmable GPUs), for memory allocation, and for non-volatile storage (with SSDs). Amazon Web Services (AWS) alone offers 57 different instance types. Our work is therefore concerned with the assessment of how current generic one-size-fits-all FaaS offerings handle scientific computing workloads, whether the proliferation of specialised FaaS instance types can be expected and how they would differ from commonly offered IaaS instance types. In this paper, we contribute specifically (i) a refined view on how software can be made fitting into special-purpose FaaS contexts with a high degree of automation through a process named FaaSification, and (ii) concepts and tools to execute such functions in constrained environments. In the remainder of the paper, we first present background information about FaaS runtimes, including our own prototypes which allow for providerindependent evaluations. Subsequently, we present four domain-specific scientific experiments conducted using FaaS to gain broad knowledge about resource requirements beyond general-purpose instances. We summarise the findings and reason about the implications for future scientific computing infrastructures. 2 Background on Function-as-a-Service 2.1 Programming Models and Runtimes The characteristics of function execution depend primarily on the FaaS runtime in use. There are broadly three categories of runtimes: 1. Proprietary commercial services, such as AWS Lambda, Google Cloud Functions, Azure Functions and Oracle Functions. 2. Open source alternatives with almost matching interfaces and functionality, such as Docker-LambCI, Effe, Google Cloud Functions Emulator and OpenLambda [6], some of which focus on local testing rather than operation. 3. Distinct open source implementations with unique designs, such as Apache OpenWhisk, Kubeless, IronFunctions and Fission, some of which are also available as commercial services, for instance IBM Bluemix OpenWhisk [5]. 
The uniqueness is a consequence of the integration with other cloud stacks (Kubernetes, OpenStack), the availability of web and command-line interfaces, the set of triggers and the level of isolation in multi-tenant operation scenarios, which is often achieved through containers. In addition, due to the often non-trivial configuration of these services, a number of mostly service-specific abstraction frameworks have become popular among developers, such as PyWren, Chalice, Zappa, Apex and the Serverless Framework [8]. The frameworks and runtimes differ in their support for programming languages, but also in the function signatures, parameters and return values. Hence, a comparison of the entire set of offerings requires a baseline. The research in this paper is congruously conducted with the mentioned commercial FaaS providers as well as with our open-source FaaS tool Snafu which allows for managing, executing and testing functions across provider-specific interfaces [14]. The service ecosystem relationship between Snafu and the commercial FaaS providers is shown in Fig. 2. Snafu is able to import services from three providers (AWS Lambda, IBM Bluemix OpenWhisk, Google Cloud Functions) and furthermore offers a compatible control plane to all three of them in its current implementation version. At its core, it contains a modular runtime environment with prototypical maturity for functions implemented in JavaScript, Java, Python and C. Most importantly, it enables repeatable research as it can be deployed as a container, in a virtual machine or on a bare metal workstation. Notably absent from the categories above are FaaS offerings in e-science infrastructures and research clouds, despite the programming model resembling widely used job submission systems. We expect our practical research contributions to overcome this restriction in a vendor-independent manner. Snafu, for instance, is already available as an alpha-version launch profile in the CloudLab testbed federated across several U.S. installations with a total capacity of almost 15000 cores [12], as well as in EGI’s federated cloud across Europe. Fig. 2. Snafu and its ecosystem and tooling Using Snafu, it is possible to adhere to the diverse programming conventions and execution conditions at commercial services while at the same time controlling and lifting the execution restrictions as necessary. In particular, it is possible to define memory-optimised, storage-optimised and compute-optimised execution profiles which serve to conduct the anticipated research on generic (general-purpose) versus specialised (special-purpose) cloud offerings for scientific computing. Snafu can execute in single process mode as well as in a loadbalancing setup where each request is forwarded by the master instance to a slave instance which in turn executes the function natively, through a languagespecific interpreter or through a container. Table 1 summarises the features of selected FaaS runtimes. Table 1. FaaS runtimes and their features Runtime Languages Programming model Import/Export AWS Lambda JavaScript, Python, Java, C# Lambda – Google Cloud Functions JavaScrip",
"title": ""
},
{
"docid": "057621c670a9b7253ba829210c530dca",
"text": "Actual challenges in production are individualization and short product lifecycles. To achieve this, the product development and the production planning must be accelerated. In some cases specialized production machines are engineered for automating production processes for a single product. Regarding the engineering of specialized production machines, there is often a sequential process starting with the mechanics, proceeding with the electrics and ending with the automation design. To accelerate this engineering process the different domains have to be parallelized as far as possible (Schlögl, 2008). Thereby the different domains start detailing in parallel after the definition of a common concept. The system integration follows the detailing with the objective to verify the system including the PLC-code. Regarding production machines, the system integration is done either by commissioning of the real machine or by validating the PLCcode against a model of the machine, so called virtual commissioning.",
"title": ""
},
{
"docid": "a6499aad878777373006742778145ddb",
"text": "The very term 'Biotechnology' elicits a range of emotions, from wonder and awe to downright fear and hostility. This is especially true among non-scientists, particularly in respect of agricultural and food biotechnology. These emotions indicate just how poorly understood agricultural biotechnology is and the need for accurate, dispassionate information in the public sphere to allow a rational public debate on the actual, as opposed to the perceived, risks and benefits of agricultural biotechnology. This review considers first the current state of public knowledge on agricultural biotechnology, and then explores some of the popular misperceptions and logical inconsistencies in both Europe and North America. I then consider the problem of widespread scientific illiteracy, and the role of the popular media in instilling and perpetuating misperceptions. The impact of inappropriate efforts to provide 'balance' in a news story, and of belief systems and faith also impinges on public scientific illiteracy. Getting away from the abstract, we explore a more concrete example of the contrasting approach to agricultural biotechnology adoption between Europe and North America, in considering divergent approaches to enabling coexistence in farming practices. I then question who benefits from agricultural biotechnology. Is it only the big companies, or is it society at large--and the environment--also deriving some benefit? Finally, a crucial aspect in such a technologically complex issue, ordinary and intelligent non-scientifically trained consumers cannot be expected to learn the intricacies of the technology to enable a personal choice to support or reject biotechnology products. The only reasonable and pragmatic alternative is to place trust in someone to provide honest advice. But who, working in the public interest, is best suited to provide informed and accessible, but objective, advice to wary consumers?",
"title": ""
},
{
"docid": "b86ab15486581bbf8056e4f1d30eb4e5",
"text": "Existing peer-to-peer publish-subscribe systems rely on structured-overlays and rendezvous nodes to store and relay group membership information. While conceptually simple, this design incurs the significant cost of creating and maintaining rigid-structures and introduces hotspots in the system at nodes that are neither publishers nor subscribers. In this paper, we introduce Quasar, a rendezvous-less probabilistic publish-subscribe system that caters to the specific needs of social networks. It is designed to handle social networks of many groups; on the order of the number of users in the system. It creates a routing infrastructure based on the proactive dissemination of highly aggregated routing vectors to provide anycast-like directed walks in the overlay. This primitive, when coupled with a novel mechanism for dynamically negating routes, enables scalable and efficient group-multicast that obviates the need for structure and rendezvous nodes. We examine the feasibility of this approach and show in a large-scale simulation that the system is scalable and efficient.",
"title": ""
},
{
"docid": "e2f6cd2a6b40c498755e0daf98cead19",
"text": "According to an estimate several billion smart devices will be connected to the Internet by year 2020. This exponential increase in devices is a challenge to the current Internet architecture, where connectivity is based on host-to-host communication. Information-Centric Networking is a novel networking paradigm in which data is addressed by its name instead of location. Several ICN architecture proposals have emerged from research communities to address challenges introduced by the current Internet Protocol (IP) regarding e.g. scalability. Content-Centric Networking (CCN) is one of the proposals. In this paper we present a way to use CCN in an Internet of Things (IoT) context. We quantify the benefits from hierarchical content naming, transparent in-network caching and other information-centric networking characteristics in a sensor environment. As a proof of concept we implemented a presentation bridge for a home automation system that provides services to the network through CCN.",
"title": ""
},
{
"docid": "3a314a72ea2911844a5a3462d052f4e7",
"text": "While increasing income inequality in China has been commented on and studied extensively, relatively little analysis is available on inequality in other dimensions of human development. Using data from different sources, this paper presents some basic facts on the evolution of spatial inequalities in education and healthcare in China over the long run. In the era of economic reforms, as the foundations of education and healthcare provision have changed, so has the distribution of illiteracy and infant mortality. Across provinces and within provinces, between rural and urban areas and within rural and urban areas, social inequalities have increased substantially since the reforms began.",
"title": ""
},
{
"docid": "6d41b17506d0e8964f850c065b9286cb",
"text": "Representation learning is a key issue for most Natural Language Processing (NLP) tasks. Most existing representation models either learn little structure information or just rely on pre-defined structures, leading to degradation of performance and generalization capability. This paper focuses on learning both local semantic and global structure representations for text classification. In detail, we propose a novel Sandwich Neural Network (SNN) to learn semantic and structure representations automatically without relying on parsers. More importantly, semantic and structure information contribute unequally to the text representation at corpus and instance level. To solve the fusion problem, we propose two strategies: Adaptive Learning Sandwich Neural Network (AL-SNN) and Self-Attention Sandwich Neural Network (SA-SNN). The former learns the weights at corpus level, and the latter further combines attention mechanism to assign the weights at instance level. Experimental results demonstrate that our approach achieves competitive performance on several text classification tasks, including sentiment analysis, question type classification and subjectivity classification. Specifically, the accuracies are MR (82.1%), SST-5 (50.4%), TREC (96%) and SUBJ (93.9%).",
"title": ""
},
{
"docid": "06f1c7daafcf59a8eb2ddf430d0d7f18",
"text": "OBJECTIVES\nWe aimed to evaluate the efficacy of reinforcing short-segment pedicle screw fixation with polymethyl methacrylate (PMMA) vertebroplasty in patients with thoracolumbar burst fractures.\n\n\nMETHODS\nWe enrolled 70 patients with thoracolumbar burst fractures for treatment with short-segment pedicle screw fixation. Fractures in Group A (n = 20) were reinforced with PMMA vertebroplasty during surgery. Group B patients (n = 50) were not treated with PMMA vertebroplasty. Kyphotic deformity, anterior vertebral height, instrument failure rates, and neurological function outcomes were compared between the two groups.\n\n\nRESULTS\nKyphosis correction was achieved in Group A (PMMA vertebroplasty) and Group B (Group A, 6.4 degrees; Group B, 5.4 degrees). At the end of the follow-up period, kyphosis correction was maintained in Group A but lost in Group B (Group A, 0.33-degree loss; Group B, 6.20-degree loss) (P = 0.0001). After surgery, greater anterior vertebral height was achieved in Group A than in Group B (Group A, 12.9%; Group B, 2.3%) (P < 0.001). During follow-up, anterior vertebral height was maintained only in Group A (Group A, 0.13 +/- 4.06%; Group B, -6.17 +/- 1.21%) (P < 0.001). Patients in both Groups A and B demonstrated good postoperative Denis Pain Scale grades (P1 and P2), but Group A had better results than Group B in terms of the control of severe and constant pain (P4 and P5) (P < 0.001). The Frankel Performance Scale scores increased by nearly 1 in both Groups A and B. Group B was subdivided into Group B1 and B2. Group B1 consisted of patients who experienced instrument failure, including screw pullout, breakage, disconnection, and dislodgement (n = 11). Group B2 comprised patients from Group B who did not experience instrument failure (n = 39). There were no instrument failures among patients in Group A. Preoperative kyphotic deformity was greater in Group B1 (23.5 +/- 7.9 degrees) than in Group B2 (16.8 +/- 8.40 degrees), P < 0.05. Severe and constant pain (P4 and P5) was noted in 36% of Group B1 patients (P < 0.001), and three of these patients required removal of their implants.\n\n\nCONCLUSION\nReinforcement of short-segment pedicle fixation with PMMA vertebroplasty for the treatment of patients with thoracolumbar burst fracture may achieve and maintain kyphosis correction, and it may also increase and maintain anterior vertebral height. Good Denis Pain Scale grades and improvement in Frankel Performance Scale scores were found in patients without instrument failure (Groups A and B2). Patients with greater preoperative kyphotic deformity had a higher risk of instrument failure if they did not undergo reinforcement with vertebroplasty. PMMA vertebroplasty offers immediate spinal stability in patients with thoracolumbar burst fractures, decreases the instrument failure rate, and provides better postoperative pain control than without vertebroplasty.",
"title": ""
}
] |
scidocsrr
|
7441e5c76b17cf1f246c3efebf0dd644
|
PROBLEMS OF EMPLOYABILITY-A STUDY OF JOB – SKILL AND QUALIFICATION MISMATCH
|
[
{
"docid": "8e74a27a3edea7cf0e88317851bc15eb",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "c08e9731b9a1135b7fb52548c5c6f77e",
"text": "Many geometry processing applications, such as morphing, shape blending, transfer of texture or material properties, and fitting template meshes to scan data, require a bijective mapping between two or more models. This mapping, or cross-parameterization, typically needs to preserve the shape and features of the parameterized models, mapping legs to legs, ears to ears, and so on. Most of the applications also require the models to be represented by compatible meshes, i.e. meshes with identical connectivity, based on the cross-parameterization. In this paper we introduce novel methods for shape preserving cross-parameterization and compatible remeshing. Our cross-parameterization method computes a low-distortion bijective mapping between models that satisfies user prescribed constraints. Using this mapping, the remeshing algorithm preserves the user-defined feature vertex correspondence and the shape correlation between the models. The remeshing algorithm generates output meshes with significantly fewer elements compared to previous techniques, while accurately approximating the input geometry. As demonstrated by the examples, the compatible meshes we construct are ideally suitable for morphing and other geometry processing applications.",
"title": ""
},
{
"docid": "6b1e67c1768f9ec7a6ab95a9369b92d1",
"text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.",
"title": ""
},
{
"docid": "9c97a3ea2acfe09e3c60cbcfa35bab7d",
"text": "In comparison with document summarization on the articles from social media and newswire, argumentative zoning (AZ) is an important task in scientific paper analysis. Traditional methodology to carry on this task relies on feature engineering from different levels. In this paper, three models of generating sentence vectors for the task of sentence classification were explored and compared. The proposed approach builds sentence representations using learned embeddings based on neural network. The learned word embeddings formed a feature space, to which the examined sentence is mapped to. Those features are input into the classifiers for supervised classification. Using 10-cross-validation scheme, evaluation was conducted on the Argumentative-Zoning (AZ) annotated articles. The results showed that simply averaging the word vectors in a sentence works better than the paragraph to vector algorithm and by integrating specific cuewords into the loss function of the neural network can improve the classification performance. In comparison with the hand-crafted features, the word2vec method won for most of the categories. However, the hand-crafted features showed their strength on classifying some of the categories.",
"title": ""
},
{
"docid": "11e2ec2aab62ba8380e82a18d3fcb3d8",
"text": "In this paper we describe our effort to create a dataset for the evaluation of cross-language textual similarity detection. We present preexisting corpora and their limits and we explain the various gathered resources to overcome these limits and build our enriched dataset. The proposed dataset is multilingual, includes cross-language alignment for different granularities (from chunk to document), is based on both parallel and comparable corpora and contains human and machine translated texts. Moreover, it includes texts written by multiple types of authors (from average to professionals). With the obtained dataset, we conduct a systematic and rigorous evaluation of several state-of-the-art cross-language textual similarity detection methods. The evaluation results are reviewed and discussed. Finally, dataset and scripts are made publicly available on GitHub: http://github.com/FerreroJeremy/Cross-Language-Dataset.",
"title": ""
},
{
"docid": "c38c2d8f7c21acc3fcb9b7d9ecc6d2d1",
"text": "In this paper we proposed new technique for human identification using fusion of both face and speech which can substantially improve the rate of recognition as compared to the single biometric identification for security system development. The proposed system uses principal component analysis (PCA) as feature extraction techniques which calculate the Eigen vectors and Eigen values. These feature vectors are compared using the similarity measure algorithm like Mahalanobis Distances for the decision making. The Mel-Frequency cestrum coefficients (MFCC) feature extraction techniques are used for speech recognition in our project. Cross correlation coefficients are considered as primary features. The Hidden Markov Model (HMM) is used to calculate the like hoods in the MFCC extracted features to make the decision about the spoken wards.",
"title": ""
},
{
"docid": "c8984cf950244f0d300c6446bcb07826",
"text": "The grounded theory approach to doing qualitative research in nursing has become very popular in recent years. I confess to never really having understood Glaser and Strauss' original book: The Discovery of Grounded Theory. Since they wrote it, they have fallen out over what grounded theory might be and both produced their own versions of it. I welcomed, then, Kathy Charmaz's excellent and practical guide.",
"title": ""
},
{
"docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd",
"text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.",
"title": ""
},
{
"docid": "abec336a59db9dd1fdea447c3c0ff3d3",
"text": "Neural network training relies on our ability to find “good” minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and wellchosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, is not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple “filter normalization” method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.",
"title": ""
},
{
"docid": "8c95392ab3cc23a7aa4f621f474d27ba",
"text": "Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.",
"title": ""
},
{
"docid": "2062b94ee661e5e50cbaa1c952043114",
"text": "The harsh operating environment of the automotive application makes the semi-permanent connector susceptible to intermittent high contact resistance which eventually leads to failure. Fretting corrosion is often the cause of these failures. However, laboratory testing of sample contact materials produce results that do not correlate with commercially tested connectors. A multicontact (M-C) reliability model is developed to bring together the fundamental studies and studies conducted on commercially available connector terminals. It is based on fundamental studies of the single contact interfaces and applied to commercial multicontact terminals. The model takes into consideration firstly, that a single contact interface may recover to low contact resistance after attaining a high value and secondly, that a terminal consists of more than one contact interface. For the connector to fail, all contact interfaces have to be in the failed state at the same time.",
"title": ""
},
{
"docid": "d8a7ab2abff4c2e5bad845a334420fe6",
"text": "Tone-mapping operators (TMOs) are designed to generate perceptually similar low-dynamic-range images from high-dynamic-range ones. We studied the performance of 15 TMOs in two psychophysical experiments where observers compared the digitally generated tone-mapped images to their corresponding physical scenes. All experiments were performed in a controlled environment, and the setups were designed to emphasize different image properties: in the first experiment we evaluated the local relationships among intensity levels, and in the second one we evaluated global visual appearance among physical scenes and tone-mapped images, which were presented side by side. We ranked the TMOs according to how well they reproduced the results obtained in the physical scene. Our results show that ranking position clearly depends on the adopted evaluation criteria, which implies that, in general, these tone-mapping algorithms consider either local or global image attributes but rarely both. Regarding the question of which TMO is the best, KimKautz [\"Consistent tone reproduction,\" in Proceedings of Computer Graphics and Imaging (2008)] and Krawczyk [\"Lightness perception in tone reproduction for high dynamic range images,\" in Proceedings of Eurographics (2005), p. 3] obtained the better results across the different experiments. We conclude that more thorough and standardized evaluation criteria are needed to study all the characteristics of TMOs, as there is ample room for improvement in future developments.",
"title": ""
},
{
"docid": "d0cdbd1137e9dca85d61b3d90789d030",
"text": "In this paper, we present a methodology for recognizing seatedpostures using data from pressure sensors installed on a chair.Information about seated postures could be used to help avoidadverse effects of sitting for long periods of time or to predictseated activities for a human-computer interface. Our system designdisplays accurate near-real-time classification performance on datafrom subjects on which the posture recognition system was nottrained by using a set of carefully designed, subject-invariantsignal features. By using a near-optimal sensor placement strategy,we keep the number of required sensors low thereby reducing costand computational complexity. We evaluated the performance of ourtechnology using a series of empirical methods including (1)cross-validation (classification accuracy of 87% for ten posturesusing data from 31 sensors), and (2) a physical deployment of oursystem (78% classification accuracy using data from 19sensors).",
"title": ""
},
{
"docid": "79425b2b27a8f80d2c4012c76e6eb8f6",
"text": "This paper examines previous Technology Acceptance Model (TAM)-related studies in order to provide an expanded model that explains consumers’ acceptance of online purchasing. Our model provides extensions to the original TAM by including constructs such as social influence and voluntariness; it also examines the impact of external variables including trust, privacy, risk, and e-loyalty. We surveyed consumers in the United States and Australia. Our findings suggest that our expanded model serves as a very good predictor of consumers’ online purchasing behaviors. The linear regression model shows a respectable amount of variance explained for Behavioral Intention (R 2 = .627). Suggestions are provided for the practitioner and ideas are presented for future research.",
"title": ""
},
{
"docid": "b591b75b4653c01e3525a0889e7d9b90",
"text": "The concept of isogeometric analysis is proposed. Basis functions generated from NURBS (Non-Uniform Rational B-Splines) are employed to construct an exact geometric model. For purposes of analysis, the basis is refined and/or its order elevated without changing the geometry or its parameterization. Analogues of finite element hand p-refinement schemes are presented and a new, more efficient, higher-order concept, k-refinement, is introduced. Refinements are easily implemented and exact geometry is maintained at all levels without the necessity of subsequent communication with a CAD (Computer Aided Design) description. In the context of structural mechanics, it is established that the basis functions are complete with respect to affine transformations, meaning that all rigid body motions and constant strain states are exactly represented. Standard patch tests are likewise satisfied. Numerical examples exhibit optimal rates of convergence for linear elasticity problems and convergence to thin elastic shell solutions. A k-refinement strategy is shown to converge toward monotone solutions for advection–diffusion processes with sharp internal and boundary layers, a very surprising result. It is argued that isogeometric analysis is a viable alternative to standard, polynomial-based, finite element analysis and possesses several advantages. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b7c0864be28d70d49ae4a28fb7d78f04",
"text": "UNLABELLED\nThe replacement of crowns and bridges is a common procedure for many dental practitioners. When correctly planned and executed, fixed prostheses will provide predictable function, aesthetics and value for money. However, when done poorly, they are more likely to fail prematurely and lead to irreversible damage to the teeth and supporting structures beneath. Sound diagnosis, assessment and technical skills are essential when dealing with failed or failing fixed restorations. These skills are essential for the 21st century dentist. This paper, with treated clinical examples, illustrates the areas of technical skill and clinical decisions needed for this type of work. It also provides advice on how the risk of premature failure can, in general, be further reduced. The article also confirms the very real risk in the UK of dento-legal problems when patients experience unexpected problems with their crowns and bridges.\n\n\nCLINICAL RELEVANCE\nThis paper outlines clinical implications of failed fixed prosthodontics to the dental surgeon. It also discusses factors that we can all use to predict and reduce the risk of premature restoration failure. Restoration design, clinical execution and patient factors are the most frequent reasons for premature problems. It is worth remembering (and informing patients) that the health of the underlying supporting dental tissue is often irreversibly compromised at the time of fixed restoration failure.",
"title": ""
},
{
"docid": "dc883936f3cc19008983c9a5bb2883f3",
"text": "Laparoscopic surgery provides patients with less painful surgery but is more demanding for the surgeon. The increased technological complexity and sometimes poorly adapted equipment have led to increased complaints of surgeon fatigue and discomfort during laparoscopic surgery. Ergonomic integration and suitable laparoscopic operating room environment are essential to improve efficiency, safety, and comfort for the operating team. Understanding ergonomics can not only make life of surgeon comfortable in the operating room but also reduce physical strains on surgeon.",
"title": ""
},
{
"docid": "e9b438cfe853e98f05b661f9149c0408",
"text": "Misinformation and fact-checking are opposite forces in the news environment: the former creates inaccuracies to mislead people, while the latter provides evidence to rebut the former. These news articles are often posted on social media and attract user engagement in the form of comments. In this paper, we investigate linguistic (especially emotional and topical) signals expressed in user comments in the presence of misinformation and fact-checking. We collect and analyze a dataset of 5,303 social media posts with 2,614,374 user comments from Facebook, Twitter, and YouTube, and associate these posts to fact-check articles from Snopes and PolitiFact for veracity rulings (i.e., from true to false). We find that linguistic signals in user comments vary significantly with the veracity of posts, e.g., we observe more misinformation-awareness signals and extensive emoji and swear word usage with falser posts. We further show that these signals can help to detect misinformation. In addition, we find that while there are signals indicating positive effects after fact-checking, there are also signals indicating potential \"backfire\" effects.",
"title": ""
},
{
"docid": "cf5829d1bfa1ae243bbf67776b53522d",
"text": "There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (e.g. road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R*CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R*CNN achieves 90.2% mean AP on the PASAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Last, we show that R*CNN is not limited to action recognition. In particular, R*CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.",
"title": ""
},
{
"docid": "018b25742275dd628c58208e5bd5a532",
"text": "Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.",
"title": ""
},
{
"docid": "6ef04225b5f505a48127594a12fef112",
"text": "For differential operators of order 2, this paper presents a new method that combines generalized exponents to find those solutions that can be represented in terms of Bessel functions.",
"title": ""
}
] |
scidocsrr
|
417f95d28b4612f677364407a40b49ee
|
Analysis and taxonomy of column header categories for web tables
|
[
{
"docid": "823c0e181286d917a610f90d1c9db0c3",
"text": "Table characteristics vary widely. Consequently, a great variety of computational approaches have been applied to table recognition. In this survey, the table recognition literature is presented as an interaction of table models, observations, transformations and inferences. A table model defines the physical and logical structure of tables; the model is used to detect tables, and to analyze and decompose the detected tables. Observations perform feature measurements and data lookup, transformations alter or restructure data, and inferences generate and test hypotheses. This presentation clarifies the decisions that are made by a table recognizer, and the assumptions and inferencing techniques that underlie these decisions.",
"title": ""
}
] |
[
{
"docid": "16d1ade9aa0c9966905441752c9ea90c",
"text": "Many agricultural studies rely on infrared sensors for remote measurement of surface temperatures for crop status monitoring and estimating sensible and latent heat fluxes. Historically, applications for these non-contact thermometers employed the use of hand-held or stationary industrial infrared thermometers (IRTs) wired to data loggers. Wireless sensors in agricultural applications are a practical alternative, but the availability of low cost wireless IRTs is limited. In this study, we designed prototype narrow (10◦) field of view wireless infrared sensor modules and evaluated the performance of the IRT sensor by comparing temperature readings of an object (Tobj) against a blackbody calibrator in a controlled temperature room at ambient temperatures of 15 ◦C, 25 ◦C, 35 ◦C, and 45 ◦C. Additional comparative readings were taken over plant and soil samples alongside a hand-held IRT and over an isothermal target in the outdoors next to a wired IRT. The average root mean square error (RMSE) and mean absolute error (MAE) between the collected IRT object temperature readings and the blackbody target ranged between 0.10 and 0.79 ◦C. The wireless IRT readings also compared well with the hand-held IRT and wired industrial IRT. Additional tests performed to investigate the influence of direct radiation on IRT measurements indicated that housing the sensor in white polyvinyl chloride provided ample shielding for the self-compensating circuitry of the IR detector. The relatively low cost of the wireless IRT modules and repeatable measurements against a blackbody calibrator and commercial IR thermometers demonstrated that these wireless prototypes have the potential to provide accurate surface radiometric temperature readings in outdoor applications. Further studies are needed to thoroughly test radio frequency communication and power consumption characteristics in an outdoor setting. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "5cd3809ab7ed083de14bb622f12373fe",
"text": "The proliferation of online information sources has led to an increased use of wrappers for extracting data from Web sources. While most of the previous research has focused on quick and efficient generation of wrappers, the development of tools for wrapper maintenance has received less attention. This is an important research problem because Web sources often change in ways that prevent the wrappers from extracting data correctly. We present an efficient algorithm that learns structural information about data from positive examples alone. We describe how this information can be used for two wrapper maintenance applications: wrapper verification and reinduction. The wrapper verification system detects when a wrapper is not extracting correct data, usually because the Web source has changed its format. The reinduction algorithm automatically recovers from changes in the Web source by identifying data on Web pages so that a new wrapper may be generated for this source. To validate our approach, we monitored 27 wrappers over a period of a year. The verification algorithm correctly discovered 35 of the 37 wrapper changes, and made 16 mistakes, resulting in precision of 0.73 and recall of 0.95. We validated the reinduction algorithm on ten Web sources. We were able to successfully reinduce the wrappers, obtaining precision and recall values of 0.90 and 0.80 on the data extraction task.",
"title": ""
},
{
"docid": "c45a494afc622ec7ab5af78098945eeb",
"text": "While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood. In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image. In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack created by iteratively applying small deformations to the image, found through a gradient descent step. We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101.",
"title": ""
},
{
"docid": "5208762a8142de095c21824b0a395b52",
"text": "Battery storage (BS) systems are static energy conversion units that convert the chemical energy directly into electrical energy. They exist in our cars, laptops, electronic appliances, micro electricity generation systems and in many other mobile to stationary power supply systems. The economic advantages, partial sustainability and the portability of these units pose promising substitutes for backup power systems for hybrid vehicles and hybrid electricity generation systems. Dynamic behaviour of these systems can be analysed by using mathematical modeling and simulation software programs. Though, there have been many mathematical models presented in the literature and proved to be successful, dynamic simulation of these systems are still very exhaustive and time consuming as they do not behave according to specific mathematical models or functions. The charging and discharging of battery functions are a combination of exponential and non-linear nature. The aim of this research paper is to present a suitable convenient, dynamic battery model that can be used to model a general BS system. Proposed model is a new modified dynamic Lead-Acid battery model considering the effect of temperature and cyclic charging and discharging effects. Simulink has been used to study the characteristics of the system and the proposed system has proved to be very successful as the simulation results have been very good. Keywords—Simulink Matlab, Battery Model, Simulation, BS Lead-Acid, Dynamic modeling, Temperature effect, Hybrid Vehicles.",
"title": ""
},
{
"docid": "fc7efee1840ef385537f1686859da87c",
"text": "The self-oscillating converter is a popular circuit for cost-sensitive applications due to its simplicity and low component count. It is widely employed in mobile phone charges and as the stand-by power source in offline power supplies for data-processing equipment. However, this circuit almost was not explored for supplier Power LEDs. This paper presents a self-oscillating buck power electronics driver for supply directly Power LEDs, with no additional circuit. A simplified mathematical model of LED was used to characterize the self-oscillating converter for the power LED driver. In order to improve the performance of the proposed buck converter in this work the control of the light intensity of LEDs was done using a microcontroller to emulate PWM modulation with frequency 200 Hz. At using the converter proposed the effects of the LED manufacturing tolerances and drifts over temperature almost has no influence on the LED average current.",
"title": ""
},
{
"docid": "4b3576e6451fa78886ce440e55b04979",
"text": "In this paper, we model the document revision detection problem as a minimum cost branching problem that relies on computing document distances. Furthermore, we propose two new document distance measures, word vector-based Dynamic Time Warping (wDTW) and word vector-based Tree Edit Distance (wTED). Our revision detection system is designed for a large scale corpus and implemented in Apache Spark. We demonstrate that our system can more precisely detect revisions than state-of-the-art methods by utilizing the Wikipedia revision dumps 1 and simulated data sets.",
"title": ""
},
{
"docid": "3d895fa9057d76ed0488f530a18f15c4",
"text": "Nowadays, computer interaction is mostly done using dedicated devices. But gestures are an easy mean of expression between humans that could be used to communicate with computers in a more natural manner. Most of the current research on hand gesture recognition for HumanComputer Interaction rely on either the Neural Networks or Hidden Markov Models (HMMs). In this paper, we compare different approaches for gesture recognition and highlight the major advantages of each. We show that gestures recognition based on the Bio-mechanical characteristic of the hand provides an intuitive approach which provides more accuracy and less complexity.",
"title": ""
},
{
"docid": "79262b2834a9f6979d2e10d3464a279d",
"text": "An interleaved totem-pole boost bridgeless rectifier with reduced reverse-recovery problems for power factor correction is proposed in this paper. The proposed converter consists of two interleaved and intercoupled totem-pole boost bridgeless converter cells. The two cells operate in phase-shift mode. Thus, the input current can be continuous with low ripple. For the individual cells, they operate alternatively in discontinuous current mode and the maximum duty ratio is 50%, which allows shifting the diode current with low di/dt rate to achieve zero-current switching off. Zero-voltage switching is achieved in the MOSFETs under low line input. Furthermore, the merits of totem-pole topology are inherited. The common-mode (CM) noise interference is rather low. And the potential capacity of bidirectional power conversion is retained. In brief, the conduction losses are reduced, reverse-recovery process is improved, and high efficiency is achieved. The interleaved totem-pole cell can also be applied to bidirectional dc/dc converters and ac/dc converters. Finally, an 800 W, 100 kHz experimental prototype was built to verify the theoretical analysis and feasibility of the proposed converter, whose efficiency is above 95.5% at full load under 90 V.",
"title": ""
},
{
"docid": "1057ed913b857d0b22f5c535f919d035",
"text": "The purpose of this series is to convey the principles governing our aesthetic senses. Usually meaning visual perception, aesthetics is not merely limited to the ocular apparatus. The concept of aesthetics encompasses both the time-arts such as music, theatre, literature and film, as well as space-arts such as paintings, sculpture and architecture.",
"title": ""
},
{
"docid": "d34bfe5e6c374763f5fdf1987e4ea8ce",
"text": "BACKGROUND\nIt is not clear whether relaxation therapies are more or less effective than cognitive and behavioural therapies in the treatment of anxiety. The aims of the present study were to examine the effects of relaxation techniques compared to cognitive and behavioural therapies in reducing anxiety symptoms, and whether they have comparable efficacy across disorders.\n\n\nMETHOD\nWe conducted a meta-analysis of 50 studies (2801 patients) comparing relaxation training with cognitive and behavioural treatments of anxiety.\n\n\nRESULTS\nThe overall effect size (ES) across all anxiety outcomes, with only one combined ES in each study, was g = -0.27 [95% confidence interval (CI) = -0.41 to -0.13], favouring cognitive and behavioural therapies (number needed to treat = 6.61). However, no significant difference between relaxation and cognitive and behavioural therapies was found for generalized anxiety disorder, panic disorder, social anxiety disorder and specific phobias (considering social anxiety and specific phobias separately). Heterogeneity was moderate (I2 = 52; 95% CI = 33-65). The ES was significantly associated with age (p < 0.001), hours of cognitive and/or behavioural therapy (p = 0.015), quality of intervention (p = 0.007), relaxation treatment format (p < 0.001) and type of disorder (p = 0.008), explaining an 82% of variance.\n\n\nCONCLUSIONS\nRelaxation seems to be less effective than cognitive and behavioural therapies in the treatment of post-traumatic stress disorder, and obsessive-compulsive disorder and it might also be less effective at 1-year follow-up for panic, but there is no evidence that it is less effective for other anxiety disorders.",
"title": ""
},
{
"docid": "5cb44c68cecb0618be14cd52182dc96e",
"text": "Recognition of objects using Deep Neural Networks is an active area of research and many breakthroughs have been made in the last few years. The paper attempts to indicate how far this field has progressed. The paper briefly describes the history of research in Neural Networks and describe several of the recent advances in this field. The performances of recently developed Neural Network Algorithm over benchmark datasets have been tabulated. Finally, some the applications of this field have been provided.",
"title": ""
},
{
"docid": "01594ac29e66b229dbfacd0e1a967e3c",
"text": "This article describes two approaches for computing the line-of-sight between objects in real terrain data. Our purpose is to find an efficient algorithm for combat elements in warfare simulation such as soldiers, troops, vehicles, ships, and aircrafts, thus allowing a simulated combat theater.",
"title": ""
},
{
"docid": "256afadf1604bd8c5c1413555cb892a4",
"text": "A continuous-time dynamic model of a network of N nonlinear elements interacting via random asymmetric couplings is studied. A self-consistent mean-field theory, exact in the N ~ limit, predicts a transition from a stationary phase to a chaotic phase occurring at a critical value of the gain parameter. The autocorrelations of the chaotic flow as well as the maximal Lyapunov exponent are calculated.",
"title": ""
},
{
"docid": "640c3820afd2cca50d85762306a6c955",
"text": "Feature selection is an important technique for alleviating the curse of dimensionality. Unsupervised feature selection is more challenging than its supervised counterpart due to the lack of labels. In this paper, we present an effective method, Stochastic Neighborpreserving Feature Selection (SNFS), for selecting discriminative features in unsupervised setting. We employ the concept of stochastic neighbors and select the features that can best preserve such stochastic neighbors by minimizing the KullbackLeibler (KL) Divergence between neighborhood distributions. The proposed approach measures feature utility jointly in a nonlinear way and discriminative features can be selected due to its ’push-pull’ property. We develop an efficient algorithm for optimizing the objective function based on projected quasi-Newton method. Moreover, few existing methods provide ways for determining the optimal number of selected features and this hampers their utility in practice. Our approach is equipped with a guideline for choosing the number of features, which provides nearly optimal performance in our experiments. Experimental results show that the proposed method outperforms state-ofthe-art methods significantly on several realworld datasets.",
"title": ""
},
{
"docid": "7921953332c107314c0dc15a7911677a",
"text": "Abstract: High precision control is desirable for future weapon systems. In this paper, several control design methodologies are applied to a weapon system to assess the applicability of each control design method and to characterize the achievable performance of the gun-turret system in precision control. The design objective of the gun-turret control system is to achieve a rapid and precise tracking response with respect to the turret motor command from the fire control system under the influences of disturbances, nonlinearities, and modeling uncertainties. A fuzzy scheme is proposed for control of multi-body, multi-input and multioutput nonlinear systems with joints represented by a gun turret-barrel model which consists of two subsystems: two motors driving two loads (turret and barrel) coupled by nonlinear dynamics. Fuzzy control schemes are employed for compensation and nonlinear feedback control laws are used for control of nonlinear dynamics. Fuzzy logic control (FLC) provides an effective means of capturing the approximate, inexact nature of the real world, and to address unexpected parameter variations and anomalies. Viewed in this perspective, the essential part of the FLC is a set of linguistic control rules related by the dual concepts of fuzzy implication and the compositional rule of inference. In essence, the FLC provides an algorithm which can convert the linguistic control strategy based on expert knowledge into an automatic control strategy. Accordingly, the design must be robust, adaptive, and, hopefully, intelligent in order to accommodate these uncertainties. Simulation results verify the desired system tracking performance.",
"title": ""
},
{
"docid": "e0bb1bdcba38bcfbcc7b2da09cd05a3f",
"text": "Reconstructing the 3D surface from a set of provided range images – acquired by active or passive sensors – is an important step to generate faithful virtual models of real objects or environments. Since several approaches for high quality fusion of range images are already known, the runtime efficiency of the respective methods are of increased interest. In this paper we propose a highly efficient method for range image fusion resulting in very accurate 3D models. We employ a variational formulation for the surface reconstruction task. The global optimal solution can be found by gradient descent due to the convexity of the underlying energy functional. Further, the gradient descent procedure can be parallelized, and consequently accelerated by graphics processing units. The quality and runtime performance of the proposed method is demonstrated on wellknown multi-view stereo benchmark datasets.",
"title": ""
},
{
"docid": "1ba0fc680e21a6db070838cbf9267c8d",
"text": "The study reported in this paper developed and evaluated a web-based concept map testing system for science students. Thirty-eight Taiwanese high school students were involved and it was found that their performance on the system was not significantly related to their achievement as measured by traditional standard tests. Their views about the use of the system, in general, were positive. An analysis of students’ future use of the system and their motivation and learning strategies revealed that those with more critical thinking metacognitive activities and an effort regulation management strategy showed more willingness to use the online testing system. Moreover, students with high test anxiety showed a preference to be tested through the system.",
"title": ""
},
{
"docid": "dc418c7add2456b08bc3a6f15b31da9f",
"text": "In professional search environments, such as patent search or legal search, search tasks have unique characteristics: 1) users interactively issue several queries for a topic, and 2) users are willing to examine many retrieval results, i.e., there is typically an emphasis on recall. Recent surveys have also verified that professional searchers continue to have a strong preference for Boolean queries because they provide a record of what documents were searched. To support this type of professional search, we propose a novel Boolean query suggestion technique. Specifically, we generate Boolean queries by exploiting decision trees learned from pseudo-labeled documents and rank the suggested queries using query quality predictors. We evaluate our algorithm in simulated patent and medical search environments. Compared with a recent effective query generation system, we demonstrate that our technique is effective and general.",
"title": ""
},
{
"docid": "6e17362c0e6a4d3190b3c8b0a11d6844",
"text": "A transimpedance amplifier (TIA) has been designed in a 0.35 μm digital CMOS technology for Gigabit Ethernet. It is based on the structure proposed by Mengxiong Li [1]. This paper presents an amplifier which exploits the regulated cascode (RGC) configuration as the input stage with an integrated optical receiver which consists of an integrated photodetector, thus achieving as large effective input transconductance as that of Si Bipolar or GaAs MESFET. The RGC input configuration isolates the input parasitic capacitance including photodiode capacitance from the bandwidth determination better than common-gate TIA. A series inductive peaking is used for enhancing the bandwidth. The proposed TIA has transimpedance gain of 51.56 dBΩ, and 3-dB bandwidth of 6.57 GHz with two inductor between the RGC and source follower for 0.1 pF photodiode capacitance. The proposed TIA has an input courant noise level of about 21.57 pA/Hz0.5 and it consumes DC power of 16 mW from 3.3 V supply voltage.",
"title": ""
},
{
"docid": "71f7ce3b6e4a20a112f6a1ae9c22e8e1",
"text": "The neural correlates of many emotional states have been studied, most recently through the technique of fMRI. However, nothing is known about the neural substrates involved in evoking one of the most overwhelming of all affective states, that of romantic love, about which we report here. The activity in the brains of 17 subjects who were deeply in love was scanned using fMRI, while they viewed pictures of their partners, and compared with the activity produced by viewing pictures of three friends of similar age, sex and duration of friendship as their partners. The activity was restricted to foci in the medial insula and the anterior cingulate cortex and, subcortically, in the caudate nucleus and the putamen, all bilaterally. Deactivations were observed in the posterior cingulate gyrus and in the amygdala and were right-lateralized in the prefrontal, parietal and middle temporal cortices. The combination of these sites differs from those in previous studies of emotion, suggesting that a unique network of areas is responsible for evoking this affective state. This leads us to postulate that the principle of functional specialization in the cortex applies to affective states as well.",
"title": ""
}
] |
scidocsrr
|
0398d5cfcd43924eb95e0a856202be73
|
Microscopy cell counting and detection with fully convolutional regression networks
|
[
{
"docid": "2e7d42b44affb9fa1c12833ea8b00a96",
"text": "The objective of this work is human pose estimation in videos, where multiple frames are available. We investigate a ConvNet architecture that is able to benefit from temporal context by combining information across the multiple frames using optical flow. To this end we propose a network architecture with the following novelties: (i) a deeper network than previously investigated for regressing heatmaps, (ii) spatial fusion layers that learn an implicit spatial model, (iii) optical flow is used to align heatmap predictions from neighbouring frames, and (iv) a final parametric pooling layer which learns to combine the aligned heatmaps into a pooled confidence map. We show that this architecture outperforms a number of others, including one that uses optical flow solely at the input layers, one that regresses joint coordinates directly, and one that predicts heatmaps without spatial fusion. The new architecture outperforms the state of the art by a large margin on three video pose estimation datasets, including the very challenging Poses in the Wild dataset, and outperforms other deep methods that don't use a graphical model on the single-image FLIC benchmark (and also [5, 35] in the high precision region).",
"title": ""
}
] |
[
{
"docid": "01d34357d5b8dbf4b89d3f8683f6fc58",
"text": "Reinforcement learning (RL), while often powerful, can suffer from slow learning speeds, particularly in high dimensional spaces. The autonomous decomposition of tasks and use of hierarchical methods hold the potential to significantly speed up learning in such domains. This paper proposes a novel practical method that can autonomously decompose tasks, by leveraging association rule mining, which discovers hidden relationship among entities in data mining. We introduce a novel method called ARM-HSTRL (Association Rule Mining to extract Hierarchical Structure of Tasks in Reinforcement Learning). It extracts temporal and structural relationships of sub-goals in RL, and multi-task RL. In particular,it finds sub-goals and relationship among them. It is shown the significant efficiency and performance of the proposed method in two main topics of RL.",
"title": ""
},
{
"docid": "c65f050e911abb4b58b4e4f9b9aec63b",
"text": "The abundant spatial and contextual information provided by the advanced remote sensing technology has facilitated subsequent automatic interpretation of the optical remote sensing images (RSIs). In this paper, a novel and effective geospatial object detection framework is proposed by combining the weakly supervised learning (WSL) and high-level feature learning. First, deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features to effectively describe objects in optical RSIs. Then, a novel WSL approach is presented to object detection where the training sets require only binary labels indicating whether an image contains the target object or not. Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and start iterative learning of the object detector. A novel evaluation criterion is also developed to detect model drift and cease the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach in benchmarking with several state-of-the-art supervised-learning-based object detection approaches.",
"title": ""
},
{
"docid": "1e2006e93ad382b3997736e446c2dff2",
"text": "Classical distillation methods transfer representations from a “teacher” neural network to a “student” network by matching their output activations. Recent methods also match the Jacobians, or the gradient of output activations with the input. However, this involves making some ad hoc decisions, in particular, the choice of the loss function. In this paper, we first establish an equivalence between Jacobian matching and distillation with input noise, from which we derive appropriate loss functions for Jacobian matching. We then rely on this analysis to apply Jacobian matching to transfer learning by establishing equivalence of a recent transfer learning procedure to distillation. We then show experimentally on standard image datasets that Jacobian-based penalties improve distillation, robustness to noisy inputs, and transfer learning.",
"title": ""
},
{
"docid": "7d228b0da98868e92ab5ae13abddb29b",
"text": "An important challenge for human-like AI is compositional semantics. Recent research has attempted to address this by using deep neural networks to learn vector space embeddings of sentences, which then serve as input to other tasks. We present a new dataset for one such task, “natural language inference” (NLI), that cannot be solved using only word-level knowledge and requires some compositionality. We find that the performance of state of the art sentence embeddings (InferSent; Conneau et al., 2017) on our new dataset is poor. We analyze the decision rules learned by InferSent and find that they are consistent with simple heuristics that are ecologically valid in its training dataset. Further, we find that augmenting training with our dataset improves test performance on our dataset without loss of performance on the original training dataset. This highlights the importance of structured datasets in better understanding and improving AI systems.",
"title": ""
},
{
"docid": "cde4d7457b949420ab90bdc894f40eb0",
"text": "We study the problem of named entity recognition (NER) from electronic medical records, which is one of the most fundamental and critical problems for medical text mining. Medical records which are written by clinicians from different specialties usually contain quite different terminologies and writing styles. The difference of specialties and the cost of human annotation makes it particularly difficult to train a universal medical NER system. In this paper, we propose a labelaware double transfer learning framework (LaDTL) for cross-specialty NER, so that a medical NER system designed for one specialty could be conveniently applied to another one with minimal annotation efforts. The transferability is guaranteed by two components: (i) we propose label-aware MMD for feature representation transfer, and (ii) we perform parameter transfer with a theoretical upper bound which is also label aware. We conduct extensive experiments on 12 cross-specialty NER tasks. The experimental results demonstrate that La-DTL provides consistent accuracy improvement over strong baselines. Besides, the promising experimental results on non-medical NER scenarios indicate that LaDTL is potential to be seamlessly adapted to a wide range of NER tasks.",
"title": ""
},
{
"docid": "9ae29655fc75ad277fa541d0930d58bc",
"text": "Rapid and ongoing change creates novelty in ecosystems everywhere, both when comparing contemporary systems to their historical baselines, and predicted future systems to the present. However, the level of novelty varies greatly among places. Here we propose a formal and quantifiable definition of abiotic and biotic novelty in ecosystems, map abiotic novelty globally, and discuss the implications of novelty for the science of ecology and for biodiversity conservation. We define novelty as the degree of dissimilarity of a system, measured in one or more dimensions relative to a reference baseline, usually defined as either the present or a time window in the past. In this conceptualization, novelty varies in degree, it is multidimensional, can be measured, and requires a temporal and spatial reference. This definition moves beyond prior categorical definitions of novel ecosystems, and does not include human agency, self-perpetuation, or irreversibility as criteria. Our global assessment of novelty was based on abiotic factors (temperature, precipitation, and nitrogen deposition) plus human population, and shows that there are already large areas with high novelty today relative to the early 20th century, and that there will even be more such areas by 2050. Interestingly, the places that are most novel are often not the places where absolute changes are largest; highlighting that novelty is inherently different from change. For the ecological sciences, highly novel ecosystems present new opportunities to test ecological theories, but also challenge the predictive ability of ecological models and their validation. For biodiversity conservation, increasing novelty presents some opportunities, but largely challenges. Conservation action is necessary along the entire continuum of novelty, by redoubling efforts to protect areas where novelty is low, identifying conservation opportunities where novelty is high, developing flexible yet strong regulations and policies, and establishing long-term experiments to test management approaches. Meeting the challenge of novelty will require advances in the science of ecology, and new and creative. conservation approaches.",
"title": ""
},
{
"docid": "c1a8e30586aad77395e429556545675c",
"text": "We investigate techniques for analysis and retrieval of object trajectories in a two or three dimensional space. Such kind of data usually contain a great amount of noise, that makes all previously used metrics fail. Therefore, here we formalize non-metric similarity functions based on the Longest Common Subsequence (LCSS), which are very robust to noise and furthermore provide an intuitive notion of similarity between trajectories by giving more weight to the similar portions of the sequences. Stretching of sequences in time is allowed, as well as global translating of the sequences in space. Efficient approximate algorithms that compute these similarity measures are also provided. We compare these new methods to the widely used Euclidean and Time Warping distance functions (for real and synthetic data) and show the superiority of our approach, especially under the strong presence of noise. We prove a weaker version of the triangle inequality and employ it in an indexing structure to answer nearest neighbor queries. Finally, we present experimental results that validate the accuracy and efficiency of our approach.",
"title": ""
},
{
"docid": "efe99cd2282373a7da3250af989b86e3",
"text": "In this work, analog application for the Sliding Mode Control (SMC) to piezoelectric actuators (PEA) is presented. DSP application of the algorithm suffers from ADC and DAC conversions and mainly faces limitations in sampling time interval. Moreover piezoelectric actuators are known to have very large bandwidth close to the DSP operation frequency. Therefore, with the direct analog application, improvement of the performance and high frequency operation are expected. Design of an appropriate SMC together with a disturbance observer is suggested to have continuous control output and related experimental results for position tracking are presented with comparison of DSP and analog control application.",
"title": ""
},
{
"docid": "cb413e9b170736fc746031fae567b168",
"text": "3D integration is a fast growing field that encompasses different types of technologies. The paper addresses one of the most promising technology which uses Through Silicon Vias (TSV) for interconnecting stacked devices on wafer level to perform high density interconnects with a good electrical performance at the smallest form factor for 3D architectures. Fraunhofer IZM has developed a post front-end 3D integration process which allows stacking of functional and tested FE-devices e.g. sensors, ASICs on wafer level as well as a technology portfolio for passive silicon interposer with redistribution layers and TSV.",
"title": ""
},
{
"docid": "c183e77e531141ea04b7ea95149be70a",
"text": "Millions of computer end users need to perform tasks over large spreadsheet data, yet lack the programming knowledge to do such tasks automatically. We present a programming by example methodology that allows end users to automate such repetitive tasks. Our methodology involves designing a domain-specific language and developing a synthesis algorithm that can learn programs in that language from user-provided examples. We present instantiations of this methodology for particular domains of tasks: (a) syntactic transformations of strings using restricted forms of regular expressions, conditionals, and loops, (b) semantic transformations of strings involving lookup in relational tables, and (c) layout transformations on spreadsheet tables. We have implemented this technology as an add-in for the Microsoft Excel Spreadsheet system and have evaluated it successfully over several benchmarks picked from various Excel help forums.",
"title": ""
},
{
"docid": "8c067af7b61fae244340e784149a9c9b",
"text": "Based on EuroNCAP regulations the number of autonomous emergency braking systems for pedestrians (AEB-P) will increase over the next years. According to accident research a considerable amount of severe pedestrian accidents happen at artificial lighting, twilight or total darkness conditions. Because radar sensors are very robust in these situations, they will play an important role for future AEB-P systems. To assess and evaluate systems a pedestrian dummy with reflection characteristics as close as possible to real humans is indispensable. As an extension to existing measurements in literature this paper addresses open issues like the influence of different positions of the limbs or different clothing for both relevant automotive frequency bands. Additionally suggestions and requirements for specification of pedestrian dummies based on results of RCS measurements of humans and first experimental developed dummies are given.",
"title": ""
},
{
"docid": "0069b06db18ea5d2c6079fcb9f1bae92",
"text": "State-of-the-art techniques in Generative Adversarial Networks (GANs) such as cycleGAN is able to learn the mapping of one image domain X to another image domain Y using unpaired image data. We extend the cycleGAN to Conditional cycleGAN such that the mapping from X to Y is subjected to attribute condition Z. Using face image generation as an application example, where X is a low resolution face image, Y is a high resolution face image, and Z is a set of attributes related to facial appearance (e.g. gender, hair color, smile), we present our method to incorporate Z into the network, such that the hallucinated high resolution face image Y ′ not only satisfies the low resolution constrain inherent in X , but also the attribute condition prescribed by Z. Using face feature vector extracted from face verification network as Z, we demonstrate the efficacy of our approach on identitypreserving face image super-resolution. Our approach is general and applicable to high-quality face image generation where specific facial attributes can be controlled easily in the automatically generated results.",
"title": ""
},
{
"docid": "5d673d1b6755e3e1d451ca17644cf3ec",
"text": "The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm’s key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.",
"title": ""
},
{
"docid": "b9221d254083fe875c8e81bc8f442403",
"text": "On multi-core processors, applications are run sharing the cache. This paper presents optimization theory to co-locate applications to minimize cache interference and maximize performance. The theory precisely specifies MRC-based composition, optimization, and correctness conditions. The paper also presents a new technique called footprint symbiosis to obtain the best shared cache performance under fair CPU allocation as well as a new sampling technique which reduces the cost of locality analysis. When sampling and optimization are combined, the paper shows that it takes less than 0.1 second analysis per program to obtain a co-run that is within 1.5 percent of the best possible performance. In an exhaustive evaluation with 12,870 tests, the best prior work improves co-run performance by 56 percent on average. The new optimization improves it by another 29 percent. Without single co-run test, footprint symbiosis is able to choose co-run choices that are just 8 percent slower than the best co-run solutions found with exhaustive testing.",
"title": ""
},
{
"docid": "d9dd14f6c28ad3ae3814cb517e2430d1",
"text": "Volunteer geographical information (VGI), either in the context of citizen science or the mining of social media, has proven to be useful in various domains including natural hazards, health status, disease epidemics, and biological monitoring. Nonetheless, the variable or unknown data quality due to crowdsourcing settings are still an obstacle for fully integrating these data sources in environmental studies and potentially in policy making. The data curation process, in which a quality assurance (QA) is needed, is often driven by the direct usability of the data collected within a data conflation process or data fusion (DCDF), combining the crowdsourced data into one view, using potentially other data sources as well. Looking at current practices in VGI data quality and using two examples, namely land cover validation and inundation extent estimation, this paper discusses the close links between QA and DCDF. It aims to help in deciding whether a disentanglement can be possible, whether beneficial or not, in understanding the data curation process with respect to its methodology for future usage of crowdsourced data. Analysing situations throughout the data curation process where and when entanglement between QA and DCDF occur, the paper explores the various facets of VGI data capture, as well as data quality assessment and purposes. Far from rejecting the usability ISO quality criterion, the paper advocates for a decoupling of the QA process and the DCDF step as much as possible while still integrating them within an approach analogous to a Bayesian paradigm.",
"title": ""
},
{
"docid": "714df72467bc3e919b7ea7424883cf26",
"text": "Although a lot of attention has been paid to software cost estimation since 1960, making accurate effort and schedule estimation is still a challenge. To collect evidence and identify potential areas of improvement in software cost estimation, it is important to investigate the estimation accuracy, the estimation method used, and the factors influencing the adoption of estimation methods in current industry. This paper analyzed 112 projects from the Chinese software project benchmarking dataset and conducted questionnaire survey on 116 organizations to investigate the above information. The paper presents the current situations related to software project estimation in China and provides evidence-based suggestions on how to improve software project estimation. Our survey results suggest, e.g., that large projects were more prone to cost and schedule overruns, that most computing managers and professionals were neither satisfied nor dissatisfied with the project estimation, that very few organizations (15%) used model-based methods, and that the high adoption cost and insignificant benefit after adoption were the main causes for low use of model-based methods.",
"title": ""
},
{
"docid": "27e1d29dc8d252081e80f93186a14660",
"text": "Over the last several years there has been an increasing focus on early detection of Autism Spectrum Disorder (ASD), not only from the scientific field but also from professional associations and public health systems all across Europe. Not surprisingly, in order to offer better services and quality of life for both children with ASD and their families, different screening procedures and tools have been developed for early assessment and intervention. However, current evidence is needed for healthcare providers and policy makers to be able to implement specific measures and increase autism awareness in European communities. The general aim of this review is to address the latest and most relevant issues related to early detection and treatments. The specific objectives are (1) analyse the impact, describing advantages and drawbacks, of screening procedures based on standardized tests, surveillance programmes, or other observational measures; and (2) provide a European framework of early intervention programmes and practices and what has been learnt from implementing them in public or private settings. This analysis is then discussed and best practices are suggested to help professionals, health systems and policy makers to improve their local procedures or to develop new proposals for early detection and intervention programmes.",
"title": ""
},
{
"docid": "93f89a636828df50dfe48ffa3e868ea6",
"text": "The reparameterization trick enables the optimization of large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack continuous reparameterizations due to the discontinuous nature of discrete states. In this work we introduce concrete random variables – continuous relaxations of discrete random variables. The concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-likelihood of latent stochastic nodes) on the corresponding discrete graph. We demonstrate their effectiveness on density estimation and structured prediction tasks using neural networks.",
"title": ""
},
{
"docid": "2cd327bd5a7814776825e090b12664ec",
"text": "is an open access repository that collects the work of Arts et Métiers ParisTech researchers and makes it freely available over the web where possible. This article proposes a method based on wavelet transform and neural networks for relating pupillary behavior to psychological stress. The proposed method was tested by recording pupil diameter and electrodermal activity during a simulated driving task. Self-report measures were also collected. Participants performed a baseline run with the driving task only, followed by three stress runs where they were required to perform the driving task along with sound alerts, the presence of two human evaluators, and both. Self-reports and pupil diameter successfully indexed stress manipulation, and significant correlations were found between these measures. However, electrodermal activity did not vary accordingly. After training, the four-way parallel neu-ral network classifier could guess whether a given unknown pupil diameter signal came from one of the four experimental trials with 79.2% precision. The present study shows that pupil diameter signal has good discriminating power for stress detection. 1. INTRODUCTION Stress detection and measurement are important issues in several human–computer interaction domains such as Affective Computing, Adaptive Automation, and Ambient Intelligence. In general, researchers and system designers seek to estimate the psychological state of operators in order to adapt or redesign the working environment accordingly (Sauter, 1991). The primary goal of such adaptation is to enhance overall system performance, trying to reduce workers' psychophysi-cal detriment (e. One key aspect of stress measurement concerns the recording of physiological parameters, which are known to be modulated by the autonomic nervous system (ANS). However, despite",
"title": ""
},
{
"docid": "0f8bf207201692ad4905e28a2993ef29",
"text": "Bluespec System Verilog is an EDL toolset for ASIC and FPGA design offering significantly higher productivity via a radically different approach to high-level synthesis. Many other attempts at high-level synthesis have tried to move the design language towards a more software-like specification of the behavior of the intended hardware. By means of code samples, demonstrations and measured results, we illustrate how Bluespec System Verilog, in an environment familiar to hardware designers, can significantly improve productivity without compromising generated hardware quality.",
"title": ""
}
] |
scidocsrr
|
235c7f8204b6bcf94d528543fcbb9097
|
Depth Separation for Neural Networks
|
[
{
"docid": "7d33ba30fd30dce2cd4a3f5558a8c0ba",
"text": "It has long been conjectured that hypothesis spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical architectures than with shallow ones. Despite the vast empirical evidence, formal arguments to date are limited and do not capture the kind of networks used in practice. Using tensor factorization, we derive a universal hypothesis space implemented by an arithmetic circuit over functions applied to local data structures (e.g. image patches). The resulting networks first pass the input through a representation layer, and then proceed with a sequence of layers comprising sum followed by product-pooling, where sum corresponds to the widely used convolution operator. The hierarchical structure of networks is born from factorizations of tensors based on the linear weights of the arithmetic circuits. We show that a shallow network corresponds to a rank-1 decomposition, whereas a deep network corresponds to a Hierarchical Tucker (HT) decomposition. Log-space computation for numerical stability transforms the networks into SimNets.",
"title": ""
},
{
"docid": "40b78c5378159e9cdf38275a773b8109",
"text": "For a common class of artificial neural networks, the mean integrated squared error between the estimated network and a target function f is shown to be bounded by $${\\text{O}}\\left( {\\frac{{C_f^2 }}{n}} \\right) + O(\\frac{{ND}}{N}\\log N)$$ where n is the number of nodes, d is the input dimension of the function, N is the number of training observations, and C f is the first absolute moment of the Fourier magnitude distribution of f. The two contributions to this total risk are the approximation error and the estimation error. Approximation error refers to the distance between the target function and the closest neural network function of a given architecture and estimation error refers to the distance between this ideal network function and an estimated network function. With n ~ C f(N/(dlog N))1/2 nodes, the order of the bound on the mean integrated squared error is optimized to be O(C f((d/N)log N)1/2). The bound demonstrates surprisingly favorable properties of network estimation compared to traditional series and nonparametric curve estimation techniques in the case that d is moderately large. Similar bounds are obtained when the number of nodes n is not preselected as a function of C f (which is generally not known a priori), but rather the number of nodes is optimized from the observed data by the use of a complexity regularization or minimum description length criterion. The analysis involves Fourier techniques for the approximation error, metric entropy considerations for the estimation error, and a calculation of the index of resolvability of minimum complexity estimation of the family of networks.",
"title": ""
},
{
"docid": "6efdf43a454ce7da51927c07f1449695",
"text": "We investigate efficient representations of functions that can be written as outputs of so-called sum-product networks, that alternate layers of product and sum operations (see Fig 1 for a simple sum-product network). We find that there exist families of such functions that can be represented much more efficiently by deep sum-product networks (i.e. allowing multiple hidden layers), compared to shallow sum-product networks (constrained to using a single hidden layer). For instance, there is a family of functions fn where n is the number of input variables, such that fn can be computed with a deep sum-product network of log 2 n layers and n−1 units, while a shallow sum-product network (two layers) requires 2 √ n−1 units. These mathematical results are in the same spirit as those by H̊astad and Goldmann (1991) on the limitations of small depth computational circuits. They motivate using deep networks to be able to model complex functions more efficiently than with shallow networks. Exponential gains in terms of the number of parameters are quite significant in the context of statistical machine learning. Indeed, the number of training samples required to optimize a model’s parameters without suffering from overfitting typically increases with the number of parameters. Deep networks thus offer a promising way to learn complex functions from limited data, even though parameter optimization may still be challenging.",
"title": ""
}
] |
[
{
"docid": "96d123a5c9a01922ebb99623fddd1863",
"text": "Previous studies have shown that Wnt signaling is involved in postnatal mammalian myogenesis; however, the downstream mechanism of Wnt signaling is not fully understood. This study reports that the murine four-and-a-half LIM domain 1 (Fhl1) could be stimulated by β-catenin or LiCl treatment to induce myogenesis. In contrast, knockdown of the Fhl1 gene expression in C2C12 cells led to reduced myotube formation. We also adopted reporter assays to demonstrate that either β-catenin or LiCl significantly activated the Fhl1 promoter, which contains four putative consensus TCF/LEF binding sites. Mutations of two of these sites caused a significant decrease in promoter activity by luciferase reporter assay. Thus, we suggest that Wnt signaling induces muscle cell differentiation, at least partly, through Fhl1 activation.",
"title": ""
},
{
"docid": "05092df698f691d35df8d4bc0008ec8f",
"text": "BACKGROUND\nPurpura fulminans is a rare and extremely severe infection, mostly due to Neisseria meningitidis frequently causing early orthopedic lesions. Few studies have reported on the initial surgical management of acute purpura fulminans. The aim of this study is to look at the predictive factors in orthopedic outcome in light of the initial surgical management in children surviving initial resuscitation.\n\n\nMETHODS\nNineteen patients referred to our institution between 1987 and 2005 were taken care of at the very beginning of the purpura fulminans. All cases were retrospectively reviewed so as to collect information on the total skin necrosis, vascular insufficiency, gangrene, and total duration of vasopressive treatment.\n\n\nRESULTS\nAll patients had multiorgan failure; only one never developed any skin necrosis or ischemia. Eighteen patients lost tissue, leading to 22 skin grafts, including two total skin grafts. There was only one graft failure. Thirteen patients were concerned by an amputation, representing, in total, 54 fingers, 36 toes, two transmetatarsal, and ten transtibial below-knee amputations, with a mean delay of 4 weeks after onset of the disease. Necrosis seems to affect mainly the lower limbs, but there is no predictive factor that impacted on the orthopedic outcome. We did not perform any fasciotomy or compartment pressure measurement to avoid non-perfusion worsening; nonetheless, our outcome in this series is comparable to existing series in the literature. V.A.C.(®) therapy could be promising regarding the management of skin necrosis in this particular context. While suffering from general multiorgan failure, great care should be observed not to miss any additional osseous or articular infection, as some patients also develop local osteitis and osteomyelitis that are often not diagnosed.\n\n\nCONCLUSIONS\nWe do not advocate very early surgery during the acute phase of purpura fulminans, as it does not change the orthopedic outcome in these children. By performing amputations and skin coverage some time after the acute phase, we obtained similar results to those found in the literature.",
"title": ""
},
{
"docid": "b7ee04e61d8666b6d865e69e24f69a6f",
"text": "CONTEXT\nThis article presents the main results from a large-scale analytical systematic review on knowledge exchange interventions at the organizational and policymaking levels. The review integrated two broad traditions, one roughly focused on the use of social science research results and the other focused on policymaking and lobbying processes.\n\n\nMETHODS\nData collection was done using systematic snowball sampling. First, we used prospective snowballing to identify all documents citing any of a set of thirty-three seminal papers. This process identified 4,102 documents, 102 of which were retained for in-depth analysis. The bibliographies of these 102 documents were merged and used to identify retrospectively all articles cited five times or more and all books cited seven times or more. All together, 205 documents were analyzed. To develop an integrated model, the data were synthesized using an analytical approach.\n\n\nFINDINGS\nThis article developed integrated conceptualizations of the forms of collective knowledge exchange systems, the nature of the knowledge exchanged, and the definition of collective-level use. This literature synthesis is organized around three dimensions of context: level of polarization (politics), cost-sharing equilibrium (economics), and institutionalized structures of communication (social structuring).\n\n\nCONCLUSIONS\nThe model developed here suggests that research is unlikely to provide context-independent evidence for the intrinsic efficacy of knowledge exchange strategies. To design a knowledge exchange intervention to maximize knowledge use, a detailed analysis of the context could use the kind of framework developed here.",
"title": ""
},
{
"docid": "b89f999bd27a6cbe1865f8853e384eba",
"text": "A rescue crawler robot with flipper arms has high ability to get over rough terrain, but it is hard to control its flipper arms in remote control. The authors aim at development of a semi-autonomous control system for the solution. In this paper, the authors propose a sensor reflexive method that controls these flippers autonomously for getting over unknown steps. Our proposed method is effective in unknown and changeable environment. The authors applied the proposed method to Aladdin, and examined validity of these control rules in unknown environment.",
"title": ""
},
{
"docid": "e1e836fe6ff690f9c85443d26a1448e3",
"text": "■ We describe an apparatus and methodology to support real-time color imaging for night operations. Registered imagery obtained in the visible through nearinfrared band is combined with thermal infrared imagery by using principles of biological opponent-color vision. Visible imagery is obtained with a Gen III image intensifier tube fiber-optically coupled to a conventional charge-coupled device (CCD), and thermal infrared imagery is obtained by using an uncooled thermal imaging array. The two fields of view are matched and imaged through a dichroic beam splitter to produce realistic color renderings of a variety of night scenes. We also demonstrate grayscale and color fusion of intensified-CCD/FLIR imagery. Progress in the development of a low-light-sensitive visible CCD imager with high resolution and wide intrascene dynamic range, operating at thirty frames per second, is described. Example low-light CCD imagery obtained under controlled illumination conditions, from full moon down to overcast starlight, processed by our adaptive dynamic-range algorithm, is shown. The combination of a low-light visible CCD imager and a thermal infrared microbolometer array in a single dualband imager, with a portable image-processing computer implementing our neuralnet algorithms, and color liquid-crystal display, yields a compact integrated version of our system as a solid-state color night-vision device. The systems described here can be applied to a large variety of military operations and civilian needs.",
"title": ""
},
{
"docid": "3419c35e0dff7b47328943235419a409",
"text": "Several methods of classification of partially edentulous arches have been proposed and are in use. The most familiar classifications are those originally proposed by Kennedy, Cummer, and Bailyn. None of these classification systems include implants, simply because most of them were proposed before implants became widely accepted. At this time, there is no classification system for partially edentulous arches incorporating implants placed or to be placed in the edentulous spaces for a removable partial denture (RPD). This article proposes a simple classification system for partially edentulous arches with implants based on the Kennedy classification system, with modification, to be used for RPDs. It incorporates the number and positions of implants placed or to be placed in the edentulous areas. A different name, Implant-Corrected Kennedy (ICK) Classification System, is given to the new classification system to be differentiated from other partially edentulous arch classification systems.",
"title": ""
},
{
"docid": "f6f984853e9fa9a77e3f2c473a9a05d8",
"text": "Autonomous driving within the pedestrian environment is always challenging, as the perception ability is limited by the crowdedness and the planning process is constrained by the complicated human behaviors. In this paper, we present a vehicle planning system for self-driving with limited perception in the pedestrian environment. Acknowledging the difficulty of obstacle detection and tracking within the crowded pedestrian environment, only the raw LIDAR sensing data is employed for the purpose of traversability analysis and vehicle planning. The designed vehicle planning system has been experimentally validated to be robust and safe within the populated pedestrian environment.",
"title": ""
},
{
"docid": "0e012c89f575d116e94b1f6718c8fe4d",
"text": "Tagging is an increasingly important task in natural language processing domains. As there are many natural language processing tasks which can be improved by applying disambiguation to the text, fast and high quality tagging algorithms are a crucial task in information retrieval and question answering. Tagging aims to assigning to each word of a text its correct tag according to the context in which the word is used. Part Of Speech (POS) tagging is a difficult problem by itself, since many words has a number of possible tags associated to it. In this paper we present a novel algorithm that deals with POS-tagging problem based on Harmony Search (HS) optimization method. This paper analyzes the relative advantages of HS metaheuristic approache to the well-known natural language processing problem of POS-tagging. In the experiments we conducted, we applied the proposed algorithm on linguistic corpora and compared the results obtained against other optimization methods such as genetic and simulated annealing algorithms. Experimental results reveal that the proposed algorithm provides more accurate results compared to the other algorithms.",
"title": ""
},
{
"docid": "0506a7f5dddf874487c90025dff0bc7d",
"text": "This paper presents a low-power decision-feedback equalizer (DFE) receiver front-end and a two-step minimum bit-error-rate (BER) adaptation algorithm. A high energy efficiency of 0.46 mW/Gbps is made possible by the combination of a direct-feedback finite-impulse-response (FIR) DFE, an infinite-impulse-response (IIR) DFE, and a clock-and-data recovery (CDR) circuit with adjustable timing offsets. Based on this architecture, the power-hungry stages used in prior DFE receivers such as the continuous-time linear equalizer (CTLE), the current-mode summing circuit for a multitap DFE, and the fast selection logic for a loop-unrolling DFE can all be removed. A two-step adaptation algorithm that finds the equalizer coefficients minimizing the BER is described. First, an extra data sampler with adjustable voltage and timing offsets measures the single-bit response (SBR) of the channel and coarsely tunes the initial coefficient values in the foreground. Next, the same circuit measures the eye-opening and bit-error rates and fine tunes the coefficients in background using a stochastic hill-climbing algorithm. A prototype DFE receiver fabricated in a 65-nm LP/RF CMOS dissipates 2.3 mW and demonstrates measured eye-opening values of 174 mV pp and 0.66 UIpp while operating at 5 Gb/s with a -15-dB loss channel.",
"title": ""
},
{
"docid": "e9326cb2e3b79a71d9e99105f0259c5a",
"text": "Although drugs are intended to be selective, at least some bind to several physiological targets, explaining side effects and efficacy. Because many drug–target combinations exist, it would be useful to explore possible interactions computationally. Here we compared 3,665 US Food and Drug Administration (FDA)-approved and investigational drugs against hundreds of targets, defining each target by its ligands. Chemical similarities between drugs and ligand sets predicted thousands of unanticipated associations. Thirty were tested experimentally, including the antagonism of the β1 receptor by the transporter inhibitor Prozac, the inhibition of the 5-hydroxytryptamine (5-HT) transporter by the ion channel drug Vadilex, and antagonism of the histamine H4 receptor by the enzyme inhibitor Rescriptor. Overall, 23 new drug–target associations were confirmed, five of which were potent (<100 nM). The physiological relevance of one, the drug N,N-dimethyltryptamine (DMT) on serotonergic receptors, was confirmed in a knockout mouse. The chemical similarity approach is systematic and comprehensive, and may suggest side-effects and new indications for many drugs.",
"title": ""
},
{
"docid": "8f137f55376693eeedb8fc5b1e86518a",
"text": "Previous studies have shown that both αA- and αB-crystallins bind Cu2+, suppress the formation of Cu2+-mediated active oxygen species, and protect ascorbic acid from oxidation by Cu2+. αA- and αB-crystallins are small heat shock proteins with molecular chaperone activity. In this study we show that the mini-αA-crystallin, a peptide consisting of residues 71-88 of αA-crystallin, prevents copper-induced oxidation of ascorbic acid. Evaluation of binding of copper to mini-αA-crystallin showed that each molecule of mini-αA-crystallin binds one copper molecule. Isothermal titration calorimetry and nanospray mass spectrometry revealed dissociation constants of 10.72 and 9.9 μM, respectively. 1,1'-Bis(4-anilino)naphthalene-5,5'-disulfonic acid interaction with mini-αA-crystallin was reduced after binding of Cu2+, suggesting that the same amino acids interact with these two ligands. Circular dichroism spectrometry showed that copper binding to mini-αA-crystallin peptide affects its secondary structure. Substitution of the His residue in mini-αA-crystallin with Ala abolished the redox-suppression activity of the peptide. During the Cu2+-induced ascorbic acid oxidation assay, a deletion mutant, αAΔ70-77, showed about 75% loss of ascorbic acid protection compared to the wild-type αA-crystallin. This difference indicates that the 70-77 region is the primary Cu2+-binding site(s) in human native full-size αA-crystallin. The role of the chaperone site in Cu2+ binding in native αA-crystallin was confirmed by the significant loss of chaperone activity by the peptide after Cu2+ binding.",
"title": ""
},
{
"docid": "565efa7a51438990b3d8da6222dca407",
"text": "The collection of huge amount of tracking data made possible by the widespread use of GPS devices, enabled the analysis of such data for several applications domains, ranging from traffic management to advertisement and social studies. However, the raw positioning data, as it is detected by GPS devices, lacks of semantic information since this data does not natively provide any additional contextual information like the places that people visited or the activities performed. Traditionally, this information is collected by hand filled questionnaire where a limited number of users are asked to annotate their tracks with the activities they have done. With the purpose of getting large amount of semantically rich trajectories, we propose an algorithm for automatically annotating raw trajectories with the activities performed by the users. To do this, we analyse the stops points trying to infer the Point Of Interest (POI) the user has visited. Based on the category of the POI and a probability measure based on the gravity law, we infer the activity performed. We experimented and evaluated the method in a real case study of car trajectories, manually annotated by users with their activities. Experimental results are encouraging and will drive our future works.",
"title": ""
},
{
"docid": "28c3e990b40b62069010e0a7f94adb11",
"text": "Steep sub-threshold transistors are promising candidates to replace the traditional MOSFETs for sub-threshold leakage reduction. In this paper, we explore the use of Inter-Band Tunnel Field Effect Transistors (TFETs) in SRAMs at ultra low supply voltages. The uni-directional current conducting TFETs limit the viability of 6T SRAM cells. To overcome this limitation, 7T SRAM designs were proposed earlier at the cost of extra silicon area. In this paper, we propose a novel 6T SRAM design using Si-TFETs for reliable operation with low leakage at ultra low voltages. We also demonstrate that a functional 6T TFET SRAM design with comparable stability margins and faster performances at low voltages can be realized using proposed design when compared with the 7T TFET SRAM cell. We achieve a leakage reduction improvement of 700X and 1600X over traditional CMOS SRAM designs at VDD of 0.3V and 0.5V respectively which makes it suitable for use at ultra-low power applications.",
"title": ""
},
{
"docid": "ec4b7c50f3277bb107961c9953fe3fc4",
"text": "A blockchain is a linked-list of immutable tamper-proof blocks, which is stored at each participating node. Each block records a set of transactions and the associated metadata. Blockchain transactions act on the identical ledger data stored at each node. Blockchain was first perceived by Satoshi Nakamoto (Satoshi 2008), as a peer-to-peer money exchange system. Nakamoto referred to the transactional tokens exchanged among clients in his system, as Bitcoins. Overview",
"title": ""
},
{
"docid": "55a29653163bdf9599bf595154a99a25",
"text": "The effect of the steel slag aggregate aging on mechanical properties of the high performance concrete is analysed in the paper. The effect of different aging periods of steel slag aggregate on mechanical properties of high performance concrete is studied. It was observed that properties of this concrete are affected by the steel slag aggregate aging process. The compressive strength increases with an increase in the aging period of steel slag aggregate. The flexural strength, Young’s modulus, and impact strength of concrete, increase at the rate similar to that of the compressive strength. The workability and the abrasion loss of concrete decrease with an increase of the steel slag aggregate aging period.",
"title": ""
},
{
"docid": "aff504d1c2149d13718595fd3e745eb0",
"text": "Figure 1 illustrates a typical example of a prediction problem: given some noisy observations of a dependent variable at certain values of the independent variable , what is our best estimate of the dependent variable at a new value, ? If we expect the underlying function to be linear, and can make some assumptions about the input data, we might use a least-squares method to fit a straight line (linear regression). Moreover, if we suspect may also be quadratic, cubic, or even nonpolynomial, we can use the principles of model selection to choose among the various possibilities. Gaussian process regression (GPR) is an even finer approach than this. Rather than claiming relates to some specific models (e.g. ), a Gaussian process can represent obliquely, but rigorously, by letting the data ‘speak’ more clearly for themselves. GPR is still a form of supervised learning, but the training data are harnessed in a subtler way. As such, GPR is a less ‘parametric’ tool. However, it’s not completely free-form, and if we’re unwilling to make even basic assumptions about , then more general techniques should be considered, including those underpinned by the principle of maximum entropy; Chapter 6 of Sivia and Skilling (2006) offers an introduction.",
"title": ""
},
{
"docid": "a1d96f46cd4fa625da9e1bf2f6299c81",
"text": "The availability of increasingly higher power commercial microwave monolithic integrated circuit (MMIC) amplifiers enables the construction of solid state amplifiers achieving output powers and performance previously achievable only from traveling wave tube amplifiers (TWTAs). A high efficiency power amplifier incorporating an antipodal finline antenna array within a coaxial waveguide is investigated at Ka Band. The coaxial waveguide combiner structure is used to demonstrate a 120 Watt power amplifier from 27 to 31GHz by combining quantity (16), 10 Watt GaN MMIC devices; achieving typical PAE of 25% for the overall power amplifier assembly.",
"title": ""
},
{
"docid": "fb58d6fe77092be4bce5dd0926c563de",
"text": "We present the Mind the Gap Model (MGM), an approach for interpretable feature extraction and selection. By placing interpretability criteria directly into the model, we allow for the model to both optimize parameters related to interpretability and to directly report a global set of distinguishable dimensions to assist with further data exploration and hypothesis generation. MGM extracts distinguishing features on real-world datasets of animal features, recipes ingredients, and disease co-occurrence. It also maintains or improves performance when compared to related approaches. We perform a user study with domain experts to show the MGM’s ability to help with dataset exploration.",
"title": ""
},
{
"docid": "d41cd48a377afa6b95598d2df6a27b08",
"text": "Graph-based approaches have been most successful in semisupervised learning. In this paper, we focus on label propagation in graph-based semisupervised learning. One essential point of label propagation is that the performance is heavily affected by incorporating underlying manifold of given data into the input graph. The other more important point is that in many recent real-world applications, the same instances are represented by multiple heterogeneous data sources. A key challenge under this setting is to integrate different data representations automatically to achieve better predictive performance. In this paper, we address the issue of obtaining the optimal linear combination of multiple different graphs under the label propagation setting. For this problem, we propose a new formulation with the sparsity (in coefficients of graph combination) property which cannot be rightly achieved by any other existing methods. This unique feature provides two important advantages: 1) the improvement of prediction performance by eliminating irrelevant or noisy graphs and 2) the interpretability of results, i.e., easily identifying informative graphs on classification. We propose efficient optimization algorithms for the proposed approach, by which clear interpretations of the mechanism for sparsity is provided. Through various synthetic and two real-world data sets, we empirically demonstrate the advantages of our proposed approach not only in prediction performance but also in graph selection ability.",
"title": ""
},
{
"docid": "7bc2bacc409341415c8ac9ca3c617c9b",
"text": "Many tasks in artificial intelligence require the collaboration of multiple agents. We exam deep reinforcement learning for multi-agent domains. Recent research efforts often take the form of two seemingly conflicting perspectives, the decentralized perspective, where each agent is supposed to have its own controller; and the centralized perspective, where one assumes there is a larger model controlling all agents. In this regard, we revisit the idea of the master-slave architecture by incorporating both perspectives within one framework. Such a hierarchical structure naturally leverages advantages from one another. The idea of combining both perspectives is intuitive and can be well motivated from many real world systems, however, out of a variety of possible realizations, we highlights three key ingredients, i.e. composed action representation, learnable communication and independent reasoning. With network designs to facilitate these explicitly, our proposal consistently outperforms latest competing methods both in synthetic experiments and when applied to challenging StarCraft1 micromanagement tasks.",
"title": ""
}
] |
scidocsrr
|
2b2f2af64ba9a552e51b0632e6cf170c
|
BASE: Using Abstraction to Improve Fault Tolerance
|
[
{
"docid": "b8b7abcef8e23f774bd4e74067a27e6f",
"text": "This note evaluates several hardware platforms and operating systems using a set of benchmarks that test memory bandwidth and various operating system features such as kernel entry/exit and file systems. The overall conclusion is that operating system performance does not seem to be improving at the same rate as the base speed of the underlying hardware. Copyright 1989 Digital Equipment Corporation d i g i t a l Western Research Laboratory 100 Hamilton Avenue Palo Alto, California 94301 USA",
"title": ""
}
] |
[
{
"docid": "83d50f7c66b14116bfa627600ded28d6",
"text": "Diet can affect cognitive ability and behaviour in children and adolescents. Nutrient composition and meal pattern can exert immediate or long-term, beneficial or adverse effects. Beneficial effects mainly result from the correction of poor nutritional status. For example, thiamin treatment reverses aggressiveness in thiamin-deficient adolescents. Deleterious behavioural effects have been suggested; for example, sucrose and additives were once suspected to induce hyperactivity, but these effects have not been confirmed by rigorous investigations. In spite of potent biological mechanisms that protect brain activity from disruption, some cognitive functions appear sensitive to short-term variations of fuel (glucose) availability in certain brain areas. A glucose load, for example, acutely facilitates mental performance, particularly on demanding, long-duration tasks. The mechanism of this often described effect is not entirely clear. One aspect of diet that has elicited much research in young people is the intake/omission of breakfast. This has obvious relevance to school performance. While effects are inconsistent in well-nourished children, breakfast omission deteriorates mental performance in malnourished children. Even intelligence scores can be improved by micronutrient supplementation in children and adolescents with very poor dietary status. Overall, the literature suggests that good regular dietary habits are the best way to ensure optimal mental and behavioural performance at all times. Then, it remains controversial whether additional benefit can be gained from acute dietary manipulations. In contrast, children and adolescents with poor nutritional status are exposed to alterations of mental and/or behavioural functions that can be corrected, to a certain extent, by dietary measures.",
"title": ""
},
{
"docid": "e82e4599a7734c9b0292a32f551dd411",
"text": "Generating a text abstract from a set of documents remains a challenging task. The neural encoder-decoder framework has recently been exploited to summarize single documents, but its success can in part be attributed to the availability of large parallel data automatically acquired from the Web. In contrast, parallel data for multi-document summarization are scarce and costly to obtain. There is a pressing need to adapt an encoder-decoder model trained on single-document summarization data to work with multiple-document input. In this paper, we present an initial investigation into a novel adaptation method. It exploits the maximal marginal relevance method to select representative sentences from multi-document input, and leverages an abstractive encoder-decoder model to fuse disparate sentences to an abstractive summary. The adaptation method is robust and itself requires no training data. Our system compares favorably to state-of-the-art extractive and abstractive approaches judged by automatic metrics and human assessors.",
"title": ""
},
{
"docid": "6bb9df7f37426563a373fae6dd46db66",
"text": "Hyper-heuristics comprise a set of approaches that are motivated (at least in part) by the goal of automating the design of heuristic methods to solve hard computational search problems. An underlying strategic research challenge is to develop more generally applicable search methodologies. The term hyper-heuristic is relatively new; it was first used in 2000 to describe heuristics to choose heuristics in the context of combinatorial optimisation. However, the idea of automating the design of heuristics is not new; it can be traced back to the 1960s. The definition of hyper-heuristics has been recently extended to refer to a search method or learning mechanism for selecting or generating heuristics to solve computational search problems. Two main hyper-heuristic categories can be considered: heuristic selection and heuristic generation. The distinguishing feature of hyper-heuristics is that they operate on a search space of heuristics (or heuristic components) rather than directly on the search space of solutions to the underlying problem that is being addressed. This paper presents a critical discussion of the scientific literature on hyper-heuristics including their origin and intellectual roots, a detailed account of the main types of approaches, and an overview of some related areas. Current research trends and directions for future research are also discussed. Journal of the Operational Research Society advance online publication, 10 July 2013; doi:10.1057/jors.2013.71",
"title": ""
},
{
"docid": "2e1385c5398196fbe9a108f241712c01",
"text": "The concept of deliberate practice was introduced to explain exceptional performance in domains such as music and chess. We apply deliberate practice theory to intermediate-level performance in typing, an activity that many people pursue on a regular basis. Sixty university students with several years typing experience participated in laboratory sessions that involved the assessment of abilities, a semistructured interview on typing experience as well as various typing tasks. In line with traditional theories of skill acquisition, experience (amount of typing since introduction to the keyboard) was related to typing performance. A perceptual speed test (digit-symbol substitution) and a measure of motor abilities (tapping) were not significantly related to performance. In line with deliberate practice theory, the highest level of performance was reported among participants who had attended a typing class in the past and who reported to adopt the goal of typing quickly during everyday typing. Findings suggest that even after several years of experience engagement in an everyday activity can serve as an opportunity for further skill improvement if individuals are willing to push themselves.",
"title": ""
},
{
"docid": "bd6f23972644f6239ab1a40e9b20aa1e",
"text": "This paper presents a machine-learning software solution that performs a multi-dimensional prediction of QoE (Quality of Experience) based on network-related SIFs (System Influence Factors) as input data. The proposed solution is verified through experimental study based on video streaming emulation over LTE (Long Term Evolution) which allows the measurement of network-related SIF (i.e., delay, jitter, loss), and subjective assessment of MOS (Mean Opinion Score). Obtained results show good performance of proposed MOS predictor in terms of mean prediction error and thereby can serve as an encouragement to implement such solution in all-IP (Internet Protocol) real environment.",
"title": ""
},
{
"docid": "4277894ef2bf88fd3a78063a8b0cc7fe",
"text": "This paper deals with a design method of LCL filter for grid-connected three-phase PWM voltage source inverters (VSI). By analyzing the total harmonic distortion of the current (THDi) in the inverter-side inductor and the ripple attenuation factor of the current (RAF) injected to the grid through the LCL network, the parameter of LCL can be clearly designed. The described LCL filter design method is verified by showing a good agreement between the target current THD and the actual one through simulation and experiment.",
"title": ""
},
{
"docid": "1e69c1aef1b194a27d150e45607abd5a",
"text": "Methods of semantic relatedness are essential for wide range of tasks such as information retrieval and text mining. This paper, concerned with these automated methods, attempts to improve Gloss Vector semantic relatedness measure for more reliable estimation of relatedness between two input concepts. Generally, this measure by considering frequency cut-off for big rams tries to remove low and high frequency words which usually do not end up being significant features. However, this naive cutting approach can lead to loss of valuable information. By employing point wise mutual information (PMI) as a measure of association between features, we will try to enforce the foregoing elimination step in a statistical fashion. Applying both approaches to the biomedical domain, using MEDLINE as corpus, MeSH as thesaurus, and available reference standard of 311 concept pairs manually rated for semantic relatedness, we will show that PMI for removing insignificant features is more effective approach than frequency cut-off.",
"title": ""
},
{
"docid": "ac6430e097fb5a7dc1f7864f283dcf47",
"text": "In the task of Object Recognition, there exists a dichotomy between the categorization of objects and estimating object pose, where the former necessitates a view-invariant representation, while the latter requires a representation capable of capturing pose information over different categories of objects. With the rise of deep architectures, the prime focus has been on object category recognition. Deep learning methods have achieved wide success in this task. In contrast, object pose regression using these approaches has received relatively much less attention. In this paper we show how deep architectures, specifically Convolutional Neural Networks (CNN), can be adapted to the task of simultaneous categorization and pose estimation of objects. We investigate and analyze the layers of various CNN models and extensively compare between them with the goal of discovering how the layers of distributed representations of CNNs represent object pose information and how this contradicts object category representations. We extensively experiment on two recent large and challenging multi-view datasets. Our models achieve better than state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "076c5e6d8d6822988c64cabf8e6d4289",
"text": "This paper presents the design of a dual-polarized log.-periodic four arm antenna bent on a conical MID substrate. The bending of a planar structure in free space is highlighted and the resulting effects on the input impedance and radiation characteristic are analyzed. The subsequent design of the UWB compliant prototype is introduced. An adequate agreement between simulated and measured performance can be observed. The antenna provides an input matching of better than −8 dB over a frequency range from 3GHz to 9GHz. The antenna pattern is characterized by a radiation with two linear, orthogonal polarizations and a front-to-back ratio of 6 dB. A maximum gain of 5.6 dBi is achieved at 5.5GHz. The pattern correlation coefficients confirm the suitability of this structure for diversity and MIMO applications. The overall antenna diameter and height are 50mm and 24mm respectively. It could therefore be used as a surface mounted or ceiling antenna in buildings, vehicles or aircrafts for communication systems.",
"title": ""
},
{
"docid": "64330f538b3d8914cbfe37565ab0d648",
"text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.",
"title": ""
},
{
"docid": "132af507da095adf4f07ec0248d34cc2",
"text": "This project is to design eight bit division algorithm program by using Xilinx ISE 10.1 software for simulation algorithm circuit partitioning through hardware Field Programmable Gate Array (FPGA). The algorithms are divide 8-bit dividend by 8-bit divisor for input and get the result 16-bit for the output. Circuit partitioning algorithms eight bits used to implement the distribution process for each program using the arithmetic and logic unit operations, called (ALU). All these operations using Verilog language in a program to be displayed on (LED) using the FPGA board. FPGA is a semiconductor device containing programmable logic components called \"logic blocks\", and programmable. Logic block can be programmed to perform the functions of basic logic gates such as AND, and XOR, or more complex combination of functions such as decoders or simple mathematical functions such as addition, subtraction, multiplication, and division (+, -, x, ÷). Finally, this project outlines the design and implementation of a new hardware divisor for performing 8-bit division. The error probability function of this division algorithm is fully characterized and contrasted against existing hardware division algorithms.",
"title": ""
},
{
"docid": "ca88e6aab6f65f04bfc7a7eb470a31e1",
"text": "We construct protocols for secure multiparty computation with the help of a computationally powerful party, namely the “cloud”. Our protocols are simultaneously efficient in a number of metrics: • Rounds: our protocols run in 4 rounds in the semi-honest setting, and 5 rounds in the malicious setting. • Communication: the number of bits exchanged in an execution of the protocol is independent of the complexity of function f being computed, and depends only on the length of the inputs and outputs. • Computation: the computational complexity of all parties is independent of the complexity of the function f , whereas that of the cloud is linear in the size of the circuit computing f . In the semi-honest case, our protocol relies on the “ring learning with errors” (RLWE) assumption, whereas in the malicious case, security is shown under the Ring LWE assumption as well as the existence of simulation-extractable NIZK proof systems and succinct non-interactive arguments. In the malicious setting, we also relax the communication and computation requirements above, and only require that they be “small” – polylogarithmic in the computation size and linear in the size of the joint size of the inputs. Our constructions leverage the key homomorphic property of the recent fully homomorphic encryption scheme of Brakerski and Vaikuntanathan (CRYPTO 2011, FOCS 2011). Namely, these schemes allow combining encryptions of messages under different keys to produce an encryption (of the sum of the messages) under the sum of the keys. We also design an efficient, non-interactive threshold decryption protocol for these fully homomorphic encryption schemes. ∗This work was partially supported by the Check Point Institute for Information Security and by the Israeli Centers of Research Excellence (I-CORE) program (center No. 4/11). †This work was partially supported by an NSERC Discovery Grant, by DARPA under Agreement number FA875011-2-0225. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government.",
"title": ""
},
{
"docid": "8bd9e3fe5d2b6fe8d58a86baf3de3522",
"text": "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.",
"title": ""
},
{
"docid": "fc1009e9515d83166e97e4e01ae9ca69",
"text": "In this paper, we present two large video multi-modal datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD) that has a total of more than 50000 gestures for the \"one-shot-learning\" competition. To increase the potential of the old dataset, we designed new well curated datasets composed of 249 gesture labels, and including 47933 gestures manually labeled the begin and end frames in sequences. Using these datasets we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for \"user independent\" gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures while the second one is designed for gesture classification from segmented data. The baseline method based on the bag of visual words model is also presented.",
"title": ""
},
{
"docid": "3e8535bc48ce88ba6103a68dd3ad1d5d",
"text": "This letter reports the concept and design of the active-braid, a novel bioinspired continuum manipulator with the ability to contract, extend, and bend in three-dimensional space with varying stiffness. The manipulator utilizes a flexible crossed-link helical array structure as its main supporting body, which is deformed by using two radial actuators and a total of six longitudinal tendons, analogously to the three major types of muscle layers found in muscular hydrostats. The helical array structure ensures that the manipulator behaves similarly to a constant volume structure (expanding while shortening and contracting while elongating). Numerical simulations and experimental prototypes are used in order to evaluate the feasibility of the concept.",
"title": ""
},
{
"docid": "6dbe972f08097355b32685c5793f853a",
"text": "BACKGROUND/AIMS\nRheumatoid arthritis (RA) is a serious health problem resulting in significant morbidity and disability. Tai Chi may be beneficial to patients with RA as a result of effects on muscle strength and 'mind-body' interactions. To obtain preliminary data on the effects of Tai Chi on RA, we conducted a pilot randomized controlled trial. Twenty patients with functional class I or II RA were randomly assigned to Tai Chi or attention control in twice-weekly sessions for 12 weeks. The American College of Rheumatology (ACR) 20 response criterion, functional capacity, health-related quality of life and the depression index were assessed.\n\n\nRESULTS\nAt 12 weeks, 5/10 patients (50%) randomized to Tai Chi achieved an ACR 20% response compared with 0/10 (0%) in the control (p = 0.03). Tai Chi had greater improvement in the disability index (p = 0.01), vitality subscale of the Medical Outcome Study Short Form 36 (p = 0.01) and the depression index (p = 0.003). Similar trends to improvement were also observed for disease activity, functional capacity and health-related quality of life. No adverse events were observed and no patients withdrew from the study.\n\n\nCONCLUSION\nTai Chi appears safe and may be beneficial for functional class I or II RA. These promising results warrant further investigation into the potential complementary role of Tai Chi for treatment of RA.",
"title": ""
},
{
"docid": "9b3adcf557ce2d3f6b3cb717694f9596",
"text": "BACKGROUND\nVariation in physician adoption of new medications is poorly understood. Traditional approaches (eg, measuring time to first prescription) may mask substantial heterogeneity in technology adoption.\n\n\nOBJECTIVE\nApply group-based trajectory models to examine the physician adoption of dabigratran, a novel anticoagulant.\n\n\nMETHODS\nA retrospective cohort study using prescribing data from IMS Xponent™ on all Pennsylvania physicians regularly prescribing anticoagulants (n=3911) and data on their characteristics from the American Medical Association Masterfile. We examined time to first dabigatran prescription and group-based trajectory models to identify adoption trajectories in the first 15 months. Factors associated with rapid adoption were examined using multivariate logistic regressions.\n\n\nOUTCOMES\nTrajectories of monthly share of oral anticoagulant prescriptions for dabigatran.\n\n\nRESULTS\nWe identified 5 distinct adoption trajectories: 3.7% rapidly and extensively adopted dabigatran (adopting in ≤3 mo with 45% of prescriptions) and 13.4% were rapid and moderate adopters (≤3 mo with 20% share). Two groups accounting for 21.6% and 16.1% of physicians, respectively, were slower to adopt (6-10 mo post-introduction) and dabigatran accounted for <10% share. Nearly half (45.2%) of anticoagulant prescribers did not adopt dabigatran. Cardiologists were much more likely than primary care physicians to rapidly adopt [odds ratio (OR)=12.2; 95% confidence interval (CI), 9.27-16.1] as were younger prescribers (age 36-45 y: OR=1.49, 95% CI, 1.13-1.95; age 46-55: OR=1.34, 95% CI, 1.07-1.69 vs. >55 y).\n\n\nCONCLUSIONS\nTrajectories of physician adoption of dabigatran were highly variable with significant differences across specialties. Heterogeneity in physician adoption has potential implications for the cost and effectiveness of treatment.",
"title": ""
},
{
"docid": "91f390e8ea6c931dff1e1d171cede590",
"text": "Deep neural networks are state of the art methods for many learning tasks due to their ability to extract increasingly better features at each network layer. However, the improved performance of additional layers in a deep network comes at the cost of added latency and energy usage in feedforward inference. As networks continue to get deeper and larger, these costs become more prohibitive for real-time and energy-sensitive applications. To address this issue, we present BranchyNet, a novel deep network architecture that is augmented with additional side branch classifiers. The architecture allows prediction results for a large portion of test samples to exit the network early via these branches when samples can already be inferred with high confidence. BranchyNet exploits the observation that features learned at an early layer of a network may often be sufficient for the classification of many data points. For more difficult samples, which are expected less frequently, BranchyNet will use further or all network layers to provide the best likelihood of correct prediction. We study the BranchyNet architecture using several well-known networks (LeNet, AlexNet, ResNet) and datasets (MNIST, CIFAR10) and show that it can both improve accuracy and significantly reduce the inference time of the network.",
"title": ""
},
{
"docid": "7700935aeb818b8c863747c0624764db",
"text": "The Internal Model Control (IMC) is a transparent framework for designing and tuning the controller. The proportional-integral (PI) and proportional-integral derivative (PID) controllers have ability to meet most of the control objectives and this led to their widespread acceptance in the control industry. In this paper the IMC-based PID controller is designed. IMC-based PID tuning method is a trade-off between closed-loop performance and robustness to model inaccuracies achieved with a single tuning parameter λ. The IMC-PID controller shows good set-point tracking property. In this paper, Robust stability synthesis of a class of uncertain parameter varying firstorder time-delay systems is presented. The output response characteristics using IMC based PID controller along with characteristics using automatic PID tuner are compared. The performance of IMC based PID for both stable, unstable as well as for the processes with time delay is studied and discussed. Various Order reduction techniques are utilized to reduce higher order polynomial into smaller order transfer function. This paper presents results of the implementation of an Internal Model Control (IMC) based PID controller for the level control application to meet robust performance and to achieve the set point tracking and disturbance rejection.",
"title": ""
},
{
"docid": "b53b1bf8c9cd562ee3bf32324d7ceae3",
"text": "In this paper we present our results on using electromyographic (EMG) sensor arrays for finger gesture recognition. Sensing muscle activity allows to capture finger motion without placing sensors directly at the hand or fingers and thus may be used to build unobtrusive body-worn interfaces. We use an electrode array with 192 electrodes to record a high-density EMG of the upper forearm muscles. We present in detail a baseline system for gesture recognition on our dataset, using a naive Bayes classifier to discriminate the 27 gestures. We recorded 25 sessions from 5 subjects. We report an average accuracy of 90% for the within-session scenario, showing the feasibility of the EMG approach to discriminate a large number of subtle gestures. We analyze the effect of the number of used electrodes on the recognition performance and show the benefit of using high numbers of electrodes. Cross-session recognition typically suffers from electrode position changes from session to session. We present two methods to estimate the electrode shift between sessions based on a small amount of calibration data and compare it to a baseline system with no shift compensation. The presented methods raise the accuracy from 59% baseline accuracy to 75% accuracy after shift compensation. The dataset is publicly available.",
"title": ""
}
] |
scidocsrr
|
7ff2b333260bdd17508da12bebfd92a6
|
Mistaking minds and machines: How speech affects dehumanization and anthropomorphism.
|
[
{
"docid": "446fa2bda9922dfd9c18b1c49520dff3",
"text": "Anthropomorphism describes the tendency to imbue the real or imagined behavior of nonhuman agents with humanlike characteristics, motivations, intentions, or emotions. Although surprisingly common, anthropomorphism is not invariant. This article describes a theory to explain when people are likely to anthropomorphize and when they are not, focused on three psychological determinants--the accessibility and applicability of anthropocentric knowledge (elicited agent knowledge), the motivation to explain and understand the behavior of other agents (effectance motivation), and the desire for social contact and affiliation (sociality motivation). This theory predicts that people are more likely to anthropomorphize when anthropocentric knowledge is accessible and applicable, when motivated to be effective social agents, and when lacking a sense of social connection to other humans. These factors help to explain why anthropomorphism is so variable; organize diverse research; and offer testable predictions about dispositional, situational, developmental, and cultural influences on anthropomorphism. Discussion addresses extensions of this theory into the specific psychological processes underlying anthropomorphism, applications of this theory into robotics and human-computer interaction, and the insights offered by this theory into the inverse process of dehumanization.",
"title": ""
}
] |
[
{
"docid": "c23a86bc6d8011dab71ac5e1e2051c3b",
"text": "The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher’s flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average memory usage of AlexNet by 61% and OverFeat by 83%, a significant reduction in memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and memory hungry DNNs to date, demonstrate the memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA K40 GPU card containing 12 GB of memory, with 22% performance loss compared to a hypothetical GPU with enough memory to hold the entire DNN.",
"title": ""
},
{
"docid": "787ed9e3816d70ceb04a366d9d0cb51e",
"text": "We propose a novel method for binarization of color documents whereby the foreground text is output as black and the background as white regardless of the polarity of foreground-background shades. The method employs an edge-based connected component approach and automatically determines a threshold for each component. It has several advantages over existing binarization methods. Firstly, it can handle documents with multi-colored texts with different background shades. Secondly, the method is applicable to documents having text of widely varying sizes, usually not handled by local binarization methods. Thirdly, the method automatically computes the threshold for binarization and the logic for inverting the output from the image data and does not require any input parameter. The proposed method has been applied to a broad domain of target document types and environment and is found to have a good adaptability.",
"title": ""
},
{
"docid": "6f31b0ba60dccb6f1c4ac3e4161f8a44",
"text": "In this work, we propose an alternative solution for parallel wave generation by WaveNet. In contrast to parallel WaveNet (Oord et al., 2018), we distill a Gaussian inverse autoregressive flow from the autoregressive WaveNet by minimizing a novel regularized KL divergence between their highly-peaked output distributions. Our method computes the KL divergence in closed-form, which simplifies the training algorithm and provides very efficient distillation. In addition, we propose the first text-to-wave neural architecture for speech synthesis, which is fully convolutional and enables fast end-to-end training from scratch. It significantly outperforms the previous pipeline that connects a text-to-spectrogram model to a separately trained WaveNet (Ping et al., 2018). We also successfully distill a parallel waveform synthesizer conditioned on the hidden representation in this end-to-end model. 2",
"title": ""
},
{
"docid": "fe48a551dfbe397b7bcf52e534dfcf00",
"text": "This meta-analysis of 12 dependent variables from 9 quantitative studies comparing music to no-music conditions during treatment of children and adolescents with autism resulted in an overall effect size of d =.77 and a mean weighted correlation of r =.36 (p =.00). Since the confidence interval did not include 0, results were considered to be significant. All effects were in a positive direction, indicating benefits of the use of music in intervention. The homogeneity Q value was not significant (p =.83); therefore, results of included studies are considered to be homogeneous and explained by the overall effect size. The significant effect size, combined with the homogeneity of the studies, leads to the conclusion that all music intervention, regardless of purpose or implementation, has been effective for children and adolescents with autism. Included studies are described in terms of type of dependent variables measured; theoretical approach; number of subjects in treatment sessions; participation in and use, selection, and presentation of music; researcher discipline; published or unpublished source; and subject age. Clinical implications as well as recommendations for future research are discussed.",
"title": ""
},
{
"docid": "a9d5e1a113052c00823ebf6145ec38e6",
"text": "Deploying an automatic speech recognition system with reasonable performance requires expensive and time-consuming in-domain transcription. Previous work demonstrated that non-professional annotation through Amazon’s Mechanical Turk can match professional quality. We use Mechanical Turk to transcribe conversational speech for as little as one thirtieth the cost of professional transcription. The higher disagreement of nonprofessional transcribers does not have a significant effect on system performance. While previous work demonstrated that redundant transcription can improve data quality, we found that resources are better spent collecting more data. Finally, we suggest a concrete method for quality control without needing professional transcription.",
"title": ""
},
{
"docid": "749294846f355424b1c360b21e054fea",
"text": "BACKGROUND\nResults of small trials suggest that early interventions for social communication are effective for the treatment of autism in children. We therefore investigated the efficacy of such an intervention in a larger trial.\n\n\nMETHODS\nChildren with core autism (aged 2 years to 4 years and 11 months) were randomly assigned in a one-to-one ratio to a parent-mediated communication-focused (Preschool Autism Communication Trial [PACT]) intervention or treatment as usual at three specialist centres in the UK. Those assigned to PACT were also given treatment as usual. Randomisation was by use of minimisation of probability in the marginal distribution of treatment centre, age (</=42 months or >42 months), and autism severity (Autism Diagnostic Observation Schedule-Generic [ADOS-G] algorithm score 12-17 or 18-24). Primary outcome was severity of autism symptoms (a total score of social communication algorithm items from ADOS-G, higher score indicating greater severity) at 13 months. Complementary secondary outcomes were measures of parent-child interaction, child language, and adaptive functioning in school. Analysis was by intention to treat. This study is registered as an International Standard Randomised Controlled Trial, number ISRCTN58133827.\n\n\nRESULTS\n152 children were recruited. 77 were assigned to PACT (London [n=26], Manchester [n=26], and Newcastle [n=25]); and 75 to treatment as usual (London [n=26], Manchester [n=26], and Newcastle [n=23]). At the 13-month endpoint, the severity of symptoms was reduced by 3.9 points (SD 4.7) on the ADOS-G algorithm in the group assigned to PACT, and 2.9 (3.9) in the group assigned to treatment as usual, representing a between-group effect size of -0.24 (95% CI -0.59 to 0.11), after adjustment for centre, sex, socioeconomic status, age, and verbal and non-verbal abilities. Treatment effect was positive for parental synchronous response to child (1.22, 0.85 to 1.59), child initiations with parent (0.41, 0.08 to 0.74), and for parent-child shared attention (0.33, -0.02 to 0.68). Effects on directly assessed language and adaptive functioning in school were small.\n\n\nINTERPRETATION\nOn the basis of our findings, we cannot recommend the addition of the PACT intervention to treatment as usual for the reduction of autism symptoms; however, a clear benefit was noted for parent-child dyadic social communication.\n\n\nFUNDING\nUK Medical Research Council, and UK Department for Children, Schools and Families.",
"title": ""
},
{
"docid": "565ba6935c4fd6afdb4d393553a70d0b",
"text": "This paper presents the problem definition and guidelines of the next generation stru control benchmark problem for seismically excited buildings. Focusing on a 20-story steel s ture representing a typical midto high-rise building designed for the Los Angeles, Califo region, the goal of this study is to provide a clear basis to evaluate the efficacy of various tural control strategies. An evaluationmodel has been developed that portrays the salient feat of the structural system. Control constraints and evaluation criteria are presented for the problem. The task of each participant in this benchmark study is to define (including devices sors and control algorithms), evaluate and report on their proposed control strategies. Thes egies may be either passive, active, semi-active or a combination thereof. A simulation pro has been developed and made available to facilitate direct comparison of the efficiency and of the various control strategies. To illustrate some of the design challenges a sample contr tem design is presented, although this sample is not intended to be viewed as a comp design. Introduction The protection of civil structures, including material content and human occupants, is out a doubt a world-wide priority. The extent of protection may range from reliable operation occupant comfort to human and structural survivability. Civil structures, including existing future buildings, towers and bridges, must be adequately protected from a variety of e including earthquakes, winds, waves and traffic. The protection of structures is now moving relying entirely on the inelastic deformation of the structure to dissipate the energy of s dynamic loadings, to the application of passive, active and semi-active structural control de to mitigate undesired responses to dynamic loads. In the last two decades, many control algorithms and devices have been proposed fo engineering applications (Soong 1990; Housner, et al. 1994; Soong and Constantinou 199 Fujino,et al. 1996; Spencer and Sain 1997), each of which has certain advantages, depend the specific application and the desired objectives. At the present time, structural control res is greatly diversified with regard to these specific applications and desired objectives. A com basis for comparison of the various algorithms and devices does not currently exist. Deter 1. Prof., Dept. of Civil Engrg. and Geo. Sci., Univ. of Notre Dame, Notre Dame, IN 46556-0767. 2. Doc. Cand., Dept. of Civil Engrg. and Geo. Sci., Univ. of Notre Dame, Notre Dame, IN 46556-0767. 3. Assist. Prof., Dept. of Civil Engrg., Washington Univ., St. Louis, MO 63130-4899. March 22, 1999 1 Spencer, et al.",
"title": ""
},
{
"docid": "021d51e8152d2e2a9a834b5838139605",
"text": "Social networking sites (SNSs) have gained substantial popularity among youth in recent years. However, the relationship between the use of these Web-based platforms and mental health problems in children and adolescents is unclear. This study investigated the association between time spent on SNSs and unmet need for mental health support, poor self-rated mental health, and reports of psychological distress and suicidal ideation in a representative sample of middle and high school children in Ottawa, Canada. Data for this study were based on 753 students (55% female; Mage=14.1 years) in grades 7-12 derived from the 2013 Ontario Student Drug Use and Health Survey. Multinomial logistic regression was used to examine the associations between mental health variables and time spent using SNSs. Overall, 25.2% of students reported using SNSs for more than 2 hours every day, 54.3% reported using SNSs for 2 hours or less every day, and 20.5% reported infrequent or no use of SNSs. Students who reported unmet need for mental health support were more likely to report using SNSs for more than 2 hours every day than those with no identified unmet need for mental health support. Daily SNS use of more than 2 hours was also independently associated with poor self-rating of mental health and experiences of high levels of psychological distress and suicidal ideation. The findings suggest that students with poor mental health may be greater users of SNSs. These results indicate an opportunity to enhance the presence of health service providers on SNSs in order to provide support to youth.",
"title": ""
},
{
"docid": "cba9f80ab39de507e84b68dc598d0bb9",
"text": "In this paper we construct a noncommutative space of “pointed Drinfeld modules” that generalizes to the case of function fields the noncommutative spaces of commensurability classes of Q-lattices. It extends the usual moduli spaces of Drinfeld modules to possibly degenerate level structures. In the second part of the paper we develop some notions of quantum statistical mechanics in positive characteristic and we show that, in the case of Drinfeld modules of rank one, there is a natural time evolution on the associated noncommutative space, which is closely related to the positive characteristic L-functions introduced by Goss. The points of the usual moduli space of Drinfeld modules define KMS functionals for this time evolution. We also show that the scaling action on the dual system is induced by a Frobenius action, up to a Wick rotation to imaginary time. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "99549d037b403f78f273b3c64181fd21",
"text": "From social media has emerged continuous needs for automatic travel recommendations. Collaborative filtering (CF) is the most well-known approach. However, existing approaches generally suffer from various weaknesses. For example , sparsity can significantly degrade the performance of traditional CF. If a user only visits very few locations, accurate similar user identification becomes very challenging due to lack of sufficient information for effective inference. Moreover, existing recommendation approaches often ignore rich user information like textual descriptions of photos which can reflect users' travel preferences. The topic model (TM) method is an effective way to solve the “sparsity problem,” but is still far from satisfactory. In this paper, an author topic model-based collaborative filtering (ATCF) method is proposed to facilitate comprehensive points of interest (POIs) recommendations for social users. In our approach, user preference topics, such as cultural, cityscape, or landmark, are extracted from the geo-tag constrained textual description of photos via the author topic model instead of only from the geo-tags (GPS locations). Advantages and superior performance of our approach are demonstrated by extensive experiments on a large collection of data.",
"title": ""
},
{
"docid": "7e7cf44ce3c8982f61c6a93b89aa66e3",
"text": "This paper presents SceneCut, a novel approach to jointly discover previously unseen objects and non-object surfaces using a single RGB-D image. SceneCut's joint reasoning over scene semantics and geometry allows a robot to detect and segment object instances in complex scenes where modern deep learning-based methods either fail to separate object instances, or fail to detect objects that were not seen during training. SceneCut automatically decomposes a scene into meaningful regions which either represent objects or scene surfaces. The decomposition is qualified by an unified energy function over objectness and geometric fitting. We show how this energy function can be optimized efficiently by utilizing hierarchical segmentation trees. Moreover, we leverage a pre-trained convolutional oriented boundary network to predict accurate boundaries from images, which are used to construct high-quality region hierarchies. We evaluate SceneCut on several different indoor environments, and the results show that SceneCut significantly outperforms all the existing methods.",
"title": ""
},
{
"docid": "1f86ed06a01e7a37c5ce96d776b95511",
"text": "This paper presents a technique for incorporating terrain traversability data into a global path planning method for field mobile robots operating on rough natural terrain. The focus of this approach is on assessing the traversability characteristics of the global terrain using a multi-valued map representation of traversal dificulty, and using this information to compute a traversal cost function to ensure robot survivability. The traversal cost is then utilized by a global path planner to find an optimally safe path through the terrain. A graphical simulator for the terrain-basedpath planning is presented. The path planner is applied to a commercial Pioneer 2-AT robot andfield test results are provided.",
"title": ""
},
{
"docid": "0829cf1fb1654525627fdc61d1814196",
"text": "The selection of indexing terms for representing documents is a key decision that limits how effective subsequent retrieval can be. Often stemming algorithms are used to normalize surface forms, and thereby address the problem of not finding documents that contain words related to query terms through infectional or derivational morphology. However, rule-based stemmers are not available for every language and it is unclear which methods for coping with morphology are most effective. In this paper we investigate an assortment of techniques for representing text and compare these approaches using data sets in eighteen languages and five different writing systems.\n We find character n-gram tokenization to be highly effective. In half of the languages examined n-grams outperform unnormalized words by more than 25%; in highly infective languages relative improvements over 50% are obtained. In languages with less morphological richness the choice of tokenization is not as critical and rule-based stemming can be an attractive option, if available. We also conducted an experiment to uncover the source of n-gram power and a causal relationship between the morphological complexity of a language and n-gram effectiveness was demonstrated.",
"title": ""
},
{
"docid": "4a39ad1bac4327a70f077afa1d08c3f0",
"text": "Machine learning plays a role in many aspects of modern IR systems, and deep learning is applied in all of them. The fast pace of modern-day research has given rise to many approaches to many IR problems. The amount of information available can be overwhelming both for junior students and for experienced researchers looking for new research topics and directions. The aim of this full- day tutorial is to give a clear overview of current tried-and-trusted neural methods in IR and how they benefit IR.",
"title": ""
},
{
"docid": "68a31c4830f71e7e94b90227d69b5a79",
"text": "For many primary storage customers, storage must balance the requirements for large capacity, high performance, and low cost. A well studied technique is to place a solid state drive (SSD) cache in front of hard disk drive (HDD) storage, which can achieve much of the performance benefit of SSDs and the cost per gigabyte efficiency of HDDs. To further lower the cost of SSD caches and increase effective capacity, we propose the addition of data reduction techniques. Our cache architecture, called Nitro, has three main contributions: (1) an SSD cache design with adjustable deduplication, compression, and large replacement units, (2) an evaluation of the trade-offs between data reduction, RAM requirements, SSD writes (reduced up to 53%, which improves lifespan), and storage performance, and (3) acceleration of two prototype storage systems with an increase in IOPS (up to 120%) and reduction of read response time (up to 55%) compared to an SSD cache without Nitro. Additional benefits of Nitro include improved random read performance, faster snapshot restore, and reduced writes to SSDs.",
"title": ""
},
{
"docid": "4f5b26ab2d8bd68953d473727f6f5589",
"text": "OBJECTIVE\nThe study assessed the impact of mindfulness training on occupational safety of hospital health care workers.\n\n\nMETHODS\nThe study used a randomized waitlist-controlled trial design to test the effect of an 8-week mindfulness-based stress reduction (MBSR) course on self-reported health care worker safety outcomes, measured at baseline, postintervention, and 6 months later.\n\n\nRESULTS\nTwenty-three hospital health care workers participated in the study (11 in immediate intervention group; 12 in waitlist control group). The MBSR training decreased workplace cognitive failures (F [1, 20] = 7.44, P = 0.013, (Equation is included in full-text article.)) and increased safety compliance behaviors (F [1, 20] = 7.79, P = 0.011, (Equation is included in full-text article.)) among hospital health care workers. Effects were stable 6 months following the training. The MBSR intervention did not significantly affect participants' promotion of safety in the workplace (F [1, 20] = 0.40, P = 0.54, (Equation is included in full-text article.)).\n\n\nCONCLUSIONS\nMindfulness training may potentially decrease occupational injuries of health care workers.",
"title": ""
},
{
"docid": "e73060d189e9a4f4fd7b93e1cab22955",
"text": "We have recently shown that deep Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform feed forward deep neural networks (DNNs) as acoustic models for speech recognition. More recently, we have shown that the performance of sequence trained context dependent (CD) hidden Markov model (HMM) acoustic models using such LSTM RNNs can be equaled by sequence trained phone models initialized with connectionist temporal classification (CTC). In this paper, we present techniques that further improve performance of LSTM RNN acoustic models for large vocabulary speech recognition. We show that frame stacking and reduced frame rate lead to more accurate models and faster decoding. CD phone modeling leads to further improvements. We also present initial results for LSTM RNN models outputting words directly.",
"title": ""
},
{
"docid": "ddce6163a3fe4283a39fb341649c0ded",
"text": "Apoptosis induced by TNF-receptor I (TNFR1) is thought to proceed via recruitment of the adaptor FADD and caspase-8 to the receptor complex. TNFR1 signaling is also known to activate the transcription factor NF-kappa B and promote survival. The mechanism by which this decision between cell death and survival is arbitrated is not clear. We report that TNFR1-induced apoptosis involves two sequential signaling complexes. The initial plasma membrane bound complex (complex I) consists of TNFR1, the adaptor TRADD, the kinase RIP1, and TRAF2 and rapidly signals activation of NF-kappa B. In a second step, TRADD and RIP1 associate with FADD and caspase-8, forming a cytoplasmic complex (complex II). When NF-kappa B is activated by complex I, complex II harbors the caspase-8 inhibitor FLIP(L) and the cell survives. Thus, TNFR1-mediated-signal transduction includes a checkpoint, resulting in cell death (via complex II) in instances where the initial signal (via complex I, NF-kappa B) fails to be activated.",
"title": ""
},
{
"docid": "cbce30ed2bbdcd25fb708394dff1b7b6",
"text": "Current syntactic accounts of English resultatives are based on the assumption that result XPs are predicated of underlying direct objects. This assumption has helped to explain the presence of reflexive pronouns with some intransitive verbs but not others and the apparent lack of result XPs predicated of subjects of transitive verbs. We present problems for and counterexamples to some of the basic assumptions of the syntactic approach, which undermine its explanatory power. We develop an alternative account that appeals to principles governing the well-formedness of event structure and the event structure-to-syntax mapping. This account covers the data on intransitive verbs and predicts the distribution of subject-predicated result XPs with transitive verbs.*",
"title": ""
}
] |
scidocsrr
|
63d4fbac01a3a6bd026ce119f8fa3e5e
|
Disparity and occlusion estimation in multiocular systems and their coding for the communication of multiview image sequences
|
[
{
"docid": "bbf581230ec60c2402651d51e3a37211",
"text": "The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.",
"title": ""
}
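As a rough illustration of the interval-narrowing idea behind arithmetic coding described in the passage above, the sketch below uses floating-point arithmetic for clarity; a production coder would renormalize with integer arithmetic, and the symbol probabilities here are made up:

```python
from typing import Dict, Tuple

def encode_interval(message: str, probs: Dict[str, float]) -> Tuple[float, float]:
    """Return the final [low, high) interval that identifies `message`."""
    cum, acc = {}, 0.0
    for sym, p in probs.items():            # build the cumulative distribution
        cum[sym] = (acc, acc + p)
        acc += p
    low, high = 0.0, 1.0
    for sym in message:                     # narrow the interval symbol by symbol
        span = high - low
        c_lo, c_hi = cum[sym]
        low, high = low + span * c_lo, low + span * c_hi
    return low, high

low, high = encode_interval("abba", {"a": 0.6, "b": 0.4})
print(low, high)   # any number inside [low, high) identifies "abba"
```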
] |
[
{
"docid": "db79c4fc00f18c3d7822c9f79d1a4a83",
"text": "We propose a new pipeline for optical flow computation, based on Deep Learning techniques. We suggest using a Siamese CNN to independently, and in parallel, compute the descriptors of both images. The learned descriptors are then compared efficiently using the L2 norm and do not require network processing of patch pairs. The success of the method is based on an innovative loss function that computes higher moments of the loss distributions for each training batch. Combined with an Approximate Nearest Neighbor patch matching method and a flow interpolation technique, state of the art performance is obtained on the most challenging and competitive optical flow benchmarks.",
"title": ""
},
{
"docid": "135ceae69b9953cf8fe989dcf8d3d0da",
"text": "Recent advances in development of Wireless Communication in Vehicular Adhoc Network (VANET) has provided emerging platform for industrialists and researchers. Vehicular adhoc networks are multihop networks with no fixed infrastructure. It comprises of moving vehicles communicating with each other. One of the main challenge in VANET is to route the data efficiently from source to destination. Designing an efficient routing protocol for VANET is tedious task. Also because of wireless medium it is vulnerable to several attacks. Since attacks mislead the network operations, security is mandatory for successful deployment of such technology. This survey paper gives brief overview of different routing protocols. Also attempt has been made to identify major security issues and challenges associated with different routing protocols. .",
"title": ""
},
{
"docid": "49d533bf41f18bc96c404bb9a8bd12ae",
"text": "A back-cavity shielded bow-tie antenna system working at 900MHz center frequency for ground-coupled GPR application is investigated numerically and experimentally in this paper. Bow-tie geometrical structure is modified for a compact design and back-cavity assembly. A layer of absorber is employed to overcome the back reflection by omni-directional radiation pattern of a bow-tie antenna in H-plane, thus increasing the SNR and improve the isolation between T and R antennas as well. The designed antenna system is applied to a prototype GPR system. Tested data shows that the back-cavity shielded antenna works satisfactorily in the 900MHz GPR system.",
"title": ""
},
{
"docid": "9ca71bbeb4643a6a347050002f1317f5",
"text": "In modern society, we are increasingly disconnected from natural light/dark cycles and beset by round-the-clock exposure to artificial light. Light has powerful effects on physical and mental health, in part via the circadian system, and thus the timing of light exposure dictates whether it is helpful or harmful. In their compelling paper, Obayashi et al. (Am J Epidemiol. 2018;187(3):427-434.) offer evidence that light at night can prospectively predict an elevated incidence of depressive symptoms in older adults. Strengths of the study include the longitudinal design and direct, objective assessment of light levels, as well as accounting for multiple plausible confounders during analyses. Follow-up studies should address the study's limitations, including reliance on a global self-report of sleep quality and a 2-night assessment of light exposure that may not reliably represent typical light exposure. In addition, experimental studies including physiological circadian measures will be necessary to determine whether the light effects on depression are mediated through the circadian system or are so-called \"direct\" effects of light. In any case, these exciting findings could inform novel approaches to preventing depressive disorders in older adults.",
"title": ""
},
{
"docid": "ea55fffd5ed53588ba874780d9c5083a",
"text": "Representation learning is a central challenge across a range of machine learning areas. In reinforcement learning, effective and functional representations have the potential to tremendously accelerate learning progress and solve more challenging problems. Most prior work on representation learning has focused on generative approaches, learning representations that capture all underlying factors of variation in the observation space in a more disentangled or well-ordered manner. In this paper, we instead aim to learn functionally salient representations: representations that are not necessarily complete in terms of capturing all factors of variation in the observation space, but rather aim to capture those factors of variation that are important for decision making – that are “actionable.” These representations are aware of the dynamics of the environment, and capture only the elements of the observation that are necessary for decision making rather than all factors of variation, without explicit reconstruction of the observation. We show how these representations can be useful to improve exploration for sparse reward problems, to enable long horizon hierarchical reinforcement learning, and as a state representation for learning policies for downstream tasks. We evaluate our method on a number of simulated environments, and compare it to prior methods for representation learning, exploration, and hierarchical reinforcement learning.",
"title": ""
},
{
"docid": "794ad922f93b85e2195b3c85665a8202",
"text": "The paper shows how to create a probabilistic graph for WordNet. A node is created for every word and phrase in WordNet. An edge between two nodes is labeled with the probability that a user that is interested in the source concept will also be interested in the destination concept. For example, an edge with weight 0.3 between \"canine\" and \"dog\" indicates that there is a 30% probability that a user who searches for \"canine\" will be interested in results that contain the word \"dog\". We refer to the graph as probabilistic because we enforce the constraint that the sum of the weights of all the edges that go out of a node add up to one. Structural (e.g., the word \"canine\" is a hypernym (i.e., kind of) of the word \"dog\") and textual (e.g., the word \"canine\" appears in the textual definition of the word \"dog\") data from WordNet is used to create a Markov logic network, that is, a set of first order formulas with probabilities. The Markov logic network is then used to compute the weights of the edges in the probabilistic graph. We experimentally validate the quality of the data in the probabilistic graph on two independent benchmarks: Miller and Charles and WordSimilarity-353.",
"title": ""
},
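The normalization constraint described above (outgoing edge weights of every node sum to one) can be sketched as follows; the raw weights are invented for illustration, whereas the paper derives them from a Markov logic network over WordNet relations:

```python
from collections import defaultdict

# Hypothetical raw edge weights (e.g., hypernym and gloss co-occurrence evidence).
raw_edges = {
    ("canine", "dog"): 3.0,
    ("canine", "wolf"): 2.0,
    ("canine", "tooth"): 5.0,
}

out_sums = defaultdict(float)
for (src, _dst), w in raw_edges.items():
    out_sums[src] += w

# Renormalize so each node's outgoing probabilities sum to one.
prob_edges = {(s, d): w / out_sums[s] for (s, d), w in raw_edges.items()}
print(prob_edges[("canine", "dog")])   # 0.3, matching the 30% example above
```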
{
"docid": "976ae17105f83e45a177c81441da3afa",
"text": "In the Google Play store, an introduction page is associated with every mobile application (app) for users to acquire its details, including screenshots, description, reviews, etc. However, it remains a challenge to identify what items influence users most when downloading an app. To explore users’ perspective, we conduct a survey to inquire about this question. The results of survey suggest that the participants pay most attention to the app description which gives users a quick overview of the app. Although there exist some guidelines about how to write a good app description to attract more downloads, it is hard to define a high quality app description. Meanwhile, there is no tool to evaluate the quality of app description. In this paper, we employ the method of crowdsourcing to extract the attributes that affect the app descriptions’ quality. First, we download some app descriptions from Google Play, then invite some participants to rate their quality with the score from one (very poor) to five (very good). The participants are also requested to explain every score’s reasons. By analyzing the reasons, we extract the attributes that the participants consider important during evaluating the quality of app descriptions. Finally, we train the supervised learning models on a sample of 100 app descriptions. In our experiments, the support vector machine model obtains up to 62% accuracy. In addition, we find that the permission, the number of paragraphs and the average number of words in one feature play key roles in defining a good app description.",
"title": ""
},
{
"docid": "026c1338e3c487d69523d0f0990451a4",
"text": "This article reports the psychometric evaluation of the Pornography Consumption Inventory (PCI), which was developed to assess motivations for pornography use among hypersexual men. Initial factor structure and item analysis were conducted in a sample of men (N = 105) seeking to reduce their pornography consumption (Study 1), yielding a 4-factor solution. In a second sample of treatment-seeking hypersexual men (N = 107), the authors further investigated the properties of the PCI using confirmatory factor analytic procedures, reliability indices, and explored PCI associations with several other constructs to establish convergent and discriminant validity. These studies demonstrate psychometric evidence for the PCI items that measure tendencies of hypersexual men to use pornography (a) for sexual pleasure; (b) to escape, cope, or avoid uncomfortable emotional experiences or stress; (c) to satisfy sexual curiosity; and (d) to satisfy desires for excitement, novelty, and variety.",
"title": ""
},
{
"docid": "55ec472aaff49b328d2aaf0a001fd1f6",
"text": "The threat of hardware reverse engineering is a growing concern for a large number of applications. A main defense strategy against reverse engineering is hardware obfuscation. In this paper, we investigate physical obfuscation techniques, which perform alterations of circuit elements that are difficult or impossible for an adversary to observe. The examples of such stealthy manipulations are changes in the doping concentrations or dielectric manipulations. An attacker will, thus, extract a netlist, which does not correspond to the logic function of the device-under-attack. This approach of camouflaging has garnered recent attention in the literature. In this paper, we expound on this promising direction to conduct a systematic end-to-end study of the VLSI design process to find multiple ways to obfuscate a circuit for hardware security. This paper makes three major contributions. First, we provide a categorization of the available physical obfuscation techniques as it pertains to various design stages. There is a large and multidimensional design space for introducing obfuscated elements and mechanisms, and the proposed taxonomy is helpful for a systematic treatment. Second, we provide a review of the methods that have been proposed or in use. Third, we present recent and new device and logic-level techniques for design obfuscation. For each technique considered, we discuss feasibility of the approach and assess likelihood of its detection. Then we turn our focus to open research questions, and conclude with suggestions for future research directions.",
"title": ""
},
{
"docid": "b11c59f3b49c064b9e866fddd328d9e6",
"text": "A new class of compact in-line filters with pseudoelliptic responses is presented in this paper. The proposed filters employ a new type of mixed-mode resonator. Such a resonator consists of a cavity loaded with a suspended high permittivity dielectric puck, so that both cavity TE101 mode and dielectric TE01δ mode are exploited within the same volume. This structure realizes the transverse doublet topology and it is therefore capable of generating a transmission zero (TZ) that can be either located above or below the passband. Multiple mixedmode resonators can be used as basic building blocks to obtain higher order filters by cascading them through nonresonating nodes. These filters are capable of implementing TZs that are very close to the passband edges, thus realizing an extreme closein rejection. As a result of the dielectric loading, the proposed solution leads to a very compact structure with improved temperature stability. To validate the proposed class of filters, a second-order filter with 2.0% fractional bandwidth (FBW) and a fourth-order filter with 2.5% FBW have been designed and manufactured.",
"title": ""
},
{
"docid": "6dce88afec3456be343c6a477350aa49",
"text": "In order to capture rich language phenomena, neural machine translation models have to use a large vocabulary size, which requires high computing time and large memory usage. In this paper, we alleviate this issue by introducing a sentence-level or batch-level vocabulary, which is only a very small sub-set of the full output vocabulary. For each sentence or batch, we only predict the target words in its sentencelevel or batch-level vocabulary. Thus, we reduce both the computing time and the memory usage. Our method simply takes into account the translation options of each word or phrase in the source sentence, and picks a very small target vocabulary for each sentence based on a wordto-word translation model or a bilingual phrase library learned from a traditional machine translation model. Experimental results on the large-scale English-toFrench task show that our method achieves better translation performance by 1 BLEU point over the large vocabulary neural machine translation system of Jean et al. (2015).",
"title": ""
},
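A minimal sketch of the sentence-level vocabulary selection described above: the candidate target vocabulary for a source sentence is the union of the top-k translation options of its words. The lexical table and special tokens below are assumptions for illustration:

```python
# Hypothetical word-to-word translation table: source word -> ranked target options.
lex_table = {
    "house": [("maison", 0.7), ("logement", 0.2), ("foyer", 0.1)],
    "green": [("vert", 0.8), ("verte", 0.15), ("ecolo", 0.05)],
}

def sentence_vocab(src_tokens, table, k=2, always=("<s>", "</s>", "<unk>")):
    """Union of the k best translation options of each source token."""
    vocab = set(always)
    for tok in src_tokens:
        for tgt, _score in table.get(tok, [])[:k]:
            vocab.add(tgt)
    return vocab

print(sentence_vocab(["green", "house"], lex_table))
# The softmax over this small set replaces the softmax over the full output vocabulary.
```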
{
"docid": "122a27336317372a0d84ee353bb94a4b",
"text": "Recently, many advanced machine learning approaches have been proposed for coreference resolution; however, all of the discriminatively-trained models reason over mentions rather than entities. That is, they do not explicitly contain variables indicating the “canonical” values for each attribute of an entity (e.g., name, venue, title, etc.). This canonicalization step is typically implemented as a post-processing routine to coreference resolution prior to adding the extracted entity to a database. In this paper, we propose a discriminatively-trained model that jointly performs coreference resolution and canonicalization, enabling features over hypothesized entities. We validate our approach on two different coreference problems: newswire anaphora resolution and research paper citation matching, demonstrating improvements in both tasks and achieving an error reduction of up to 62% when compared to a method that reasons about mentions only.",
"title": ""
},
{
"docid": "36721d43d9aa484803af28a4a720ae21",
"text": "The recognition that nutrients have the ability to interact and modulate molecular mechanisms underlying an organism's physiological functions has prompted a revolution in the field of nutrition. Performing population-scaled epidemiological studies in the absence of genetic knowledge may result in erroneous scientific conclusions and misinformed nutritional recommendations. To circumvent such issues and more comprehensively probe the relationship between genes and diet, the field of nutrition has begun to capitalize on both the technologies and supporting analytical software brought forth in the post-genomic era. The creation of nutrigenomics and nutrigenetics, two fields with distinct approaches to elucidate the interaction between diet and genes but with a common ultimate goal to optimize health through the personalization of diet, provide powerful approaches to unravel the complex relationship between nutritional molecules, genetic polymorphisms, and the biological system as a whole. Reluctance to embrace these new fields exists primarily due to the fear that producing overwhelming quantities of biological data within the confines of a single study will submerge the original query; however, the current review aims to position nutrigenomics and nutrigenetics as the emerging faces of nutrition that, when considered with more classical approaches, will provide the necessary stepping stones to achieve the ambitious goal of optimizing an individual's health via nutritional intervention.",
"title": ""
},
{
"docid": "06a1d90991c5a9039c6758a66205e446",
"text": "In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression. We hypothesize that syntactic information helps in making such models more robust across domains. We propose two major changes to the model: using explicit syntactic features and introducing syntactic constraints through Integer Linear Programming (ILP). Our evaluation shows that the proposed model works better than the original model as well as a traditional non-neural-network-based model in a cross-domain setting.",
"title": ""
},
{
"docid": "f5a6dcd51ecf0dfbd1719e1eae8cbf71",
"text": "In this letter, the design of a compact high-power waveguide low-pass filter with low insertion loss, all-higher order mode suppression, and stopband rejection up to the third harmonic, intended for Ka-band satellite applications, is presented. The method is based on step-shaped bandstop elements separated by very short (ideally of zero length) waveguide sections easily manufactured by low-cost computer-controlled milling. Matching is achieved by short input/output networks based on stubs whose heights are optimized following classical approaches. The novel filter presents a reduction in insertion loss and a remarkable increase in the high-power handling capability when compared to the classical waffle-iron filter and alternative solutions previously proposed, while the out-of-band frequency behavior remains unaltered.",
"title": ""
},
{
"docid": "2b595cab271cac15ea165e46459d6923",
"text": "Autonomous Mobility On Demand (MOD) systems can utilize fleet management strategies in order to provide a high customer quality of service (QoS). Previous works on autonomous MOD systems have developed methods for rebalancing single capacity vehicles, where QoS is maintained through large fleet sizing. This work focuses on MOD systems utilizing a small number of vehicles, such as those found on a campus, where additional vehicles cannot be introduced as demand for rides increases. A predictive positioning method is presented for improving customer QoS by identifying key locations to position the fleet in order to minimize expected customer wait time. Ridesharing is introduced as a means for improving customer QoS as arrival rates increase. However, with ridesharing perceived QoS is dependent on an often unknown customer preference. To address this challenge, a customer ratings model, which learns customer preference from a 5-star rating, is developed and incorporated directly into a ridesharing algorithm. The predictive positioning and ridesharing methods are applied to simulation of a real-world campus MOD system. A combined predictive positioning and ridesharing approach is shown to reduce customer service times by up to 29%. and the customer ratings model is shown to provide the best overall MOD fleet management performance over a range of customer preferences.",
"title": ""
},
{
"docid": "cdb7380ca1a4b5a8059e3e4adc6b7ea2",
"text": "In this paper, tunable microstrip bandpass filters with two adjustable transmission poles and compensable coupling are proposed. The fundamental structure is based on a half-wavelength (λ/2) resonator with a center-tapped open-stub. Microwave varactors placed at various internal nodes separately adjust the filter's center frequency and bandwidth over a wide tuning range. The constant absolute bandwidth is achieved at different center frequencies by maintaining the distance between the in-band transmission poles. Meanwhile, the coupling strength could be compensable by tuning varactors that are side and embedding loaded in the parallel coupled microstrip lines (PCMLs). As a demonstrator, a second-order filter with seven tuning varactors is implemented and verified. A frequency range of 0.58-0.91 GHz with a 1-dB bandwidth tuning from 115 to 315 MHz (i.e., 12.6%-54.3% fractional bandwidth) is demonstrated. Specifically, the return loss of passbands with different operating center frequencies can be achieved with same level, i.e., about 13.1 and 11.6 dB for narrow and wide passband responses, respectively. To further verify the etch-tolerance characteristics of the proposed prototype filter, another second-order filter with nine tuning varactors is proposed and fabricated. The measured results exhibit that the tunable fitler with the embedded varactor-loaded PCML has less sensitivity to fabrication tolerances. Meanwhile, the passband return loss can be achieved with same level of 20 dB for narrow and wide passband responses, respectively.",
"title": ""
},
{
"docid": "249367e508f61804642ae37e27d70901",
"text": "For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have good text mining and text classification algorithms that operate well already on the title of a publication. So far, the classification performance on titles is not competitive with the performance on the full-texts if the same number of training samples is used for training. However, it is much easier to obtain title data in large quantities and to use it for training than full-text data. In this paper, we investigate the question how models obtained from training on increasing amounts of title training data compare to models from training on a constant number of full-texts. We evaluate this question on a large-scale dataset from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by a factor of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three classifiers outperform their full-text counterparts by a large margin. The best title-based classifier outperforms the best full-text method by 9.4%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.",
"title": ""
},
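The abstract above evaluates deep classifiers; purely as a hedged baseline to make the title-only setup concrete, the sketch below trains a multi-label subject indexer on titles with TF-IDF features and one-vs-rest logistic regression (scikit-learn). The titles and labels are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy training data: publication titles with multi-label subject annotations.
titles = ["Deep learning for medical image segmentation",
          "Monetary policy and inflation expectations",
          "Convolutional networks for economic forecasting"]
labels = [["machine learning", "medicine"],
          ["economics"],
          ["machine learning", "economics"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      OneVsRestClassifier(LogisticRegression(max_iter=1000)))
model.fit(titles, Y)
pred = model.predict(["Neural networks for diagnosis from medical images"])
print(mlb.inverse_transform(pred))
```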
{
"docid": "80383246c35226231b4f136c6cc0019b",
"text": "How to automatically monitor wide critical open areas is a challenge to be addressed. Recent computer vision algorithms can be exploited to avoid the deployment of a large amount of expensive sensors. In this work, we propose our object tracking system which, combined with our recently developed anomaly detection system. can provide intelligence and protection for critical areas. In this work. we report two case studies: an international pier and a city parking lot. We acquire sequences to evaluate the effectiveness of the approach in challenging conditions. We report quantitative results for object counting, detection, parking analysis, and anomaly detection. Moreover, we report state-of-the-art results for statistical anomaly detection on a public dataset.",
"title": ""
},
{
"docid": "5031c9b3dfbe2bf2a07a4f1414f594e0",
"text": "BACKGROUND\nWe assessed the effects of a three-year national-level, ministry-led health information system (HIS) data quality intervention and identified associated health facility factors.\n\n\nMETHODS\nMonthly summary HIS data concordance between a gold standard data quality audit and routine HIS data was assessed in 26 health facilities in Sofala Province, Mozambique across four indicators (outpatient consults, institutional births, first antenatal care visits, and third dose of diphtheria, pertussis, and tetanus vaccination) and five levels of health system data aggregation (daily facility paper registers, monthly paper facility reports, monthly paper district reports, monthly electronic district reports, and monthly electronic provincial reports) through retrospective yearly audits conducted July-August 2010-2013. We used mixed-effects linear models to quantify changes in data quality over time and associated health system determinants.\n\n\nRESULTS\nMedian concordance increased from 56.3% during the baseline period (2009-2010) to 87.5% during 2012-2013. Concordance improved by 1.0% (confidence interval [CI]: 0.60, 1.5) per month during the intervention period of 2010-2011 and 1.6% (CI: 0.89, 2.2) per month from 2011-2012. No significant improvements were observed from 2009-2010 (during baseline period) or 2012-2013. Facilities with more technical staff (aβ: 0.71; CI: 0.14, 1.3), more first antenatal care visits (aβ: 3.3; CI: 0.43, 6.2), and fewer clinic beds (aβ: -0.94; CI: -1.7, -0.20) showed more improvements. Compared to facilities with no stock-outs, facilities with five essential drugs stocked out had 51.7% (CI: -64.8 -38.6) lower data concordance.\n\n\nCONCLUSIONS\nA data quality intervention was associated with significant improvements in health information system data concordance across public-sector health facilities in rural and urban Mozambique. Concordance was higher at those facilities with more human resources for health and was associated with fewer clinic-level stock-outs of essential medicines. Increased investments should be made in data audit and feedback activities alongside targeted efforts to improve HIS data in low- and middle-income countries.",
"title": ""
}
] |
scidocsrr
|
568c7e5bc4f47c8bf8a0414f32f4bb13
|
Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models
|
[
{
"docid": "f9823fc9ac0750cc247cfdbf0064c8b5",
"text": "Scene segmentation is a challenging task as it need label every pixel in the image. It is crucial to exploit discriminative context and aggregate multi-scale features to achieve better segmentation. In this paper, we first propose a novel context contrasted local feature that not only leverages the informative context but also spotlights the local information in contrast to the context. The proposed context contrasted local feature greatly improves the parsing performance, especially for inconspicuous objects and background stuff. Furthermore, we propose a scheme of gated sum to selectively aggregate multi-scale features for each spatial position. The gates in this scheme control the information flow of different scale features. Their values are generated from the testing image by the proposed network learnt from the training data so that they are adaptive not only to the training data, but also to the specific testing image. Without bells and whistles, the proposed approach achieves the state-of-the-arts consistently on the three popular scene segmentation datasets, Pascal Context, SUN-RGBD and COCO Stuff.",
"title": ""
},
{
"docid": "ba29af46fd410829c450eed631aa9280",
"text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.",
"title": ""
},
{
"docid": "4301af5b0c7910480af37f01847fb1fe",
"text": "Cross-modal retrieval is a very hot research topic that is imperative to many applications involving multi-modal data. Discovering an appropriate representation for multi-modal data and learning a ranking function are essential to boost the cross-media retrieval. Motivated by the assumption that a compositional cross-modal semantic representation (pairs of images and text) is more attractive for cross-modal ranking, this paper exploits the existing image-text databases to optimize a ranking function for cross-modal retrieval, called deep compositional cross-modal learning to rank (C2MLR). In this paper, C2MLR considers learning a multi-modal embedding from the perspective of optimizing a pairwise ranking problem while enhancing both local alignment and global alignment. In particular, the local alignment (i.e., the alignment of visual objects and textual words) and the global alignment (i.e., the image-level and sentence-level alignment) are collaboratively utilized to learn the multi-modal embedding common space in a max-margin learning to rank manner. The experiments demonstrate the superiority of our proposed C2MLR due to its nature of multi-modal compositional embedding.",
"title": ""
}
] |
[
{
"docid": "c122a50d90e9f4834f36a19ba827fa9f",
"text": "Cancers are able to grow by subverting immune suppressive pathways, to prevent the malignant cells as being recognized as dangerous or foreign. This mechanism prevents the cancer from being eliminated by the immune system and allows disease to progress from a very early stage to a lethal state. Immunotherapies are newly developing interventions that modify the patient's immune system to fight cancer, by either directly stimulating rejection-type processes or blocking suppressive pathways. Extracellular adenosine generated by the ectonucleotidases CD39 and CD73 is a newly recognized \"immune checkpoint mediator\" that interferes with anti-tumor immune responses. In this review, we focus on CD39 and CD73 ectoenzymes and encompass aspects of the biochemistry of these molecules as well as detailing the distribution and function on immune cells. Effects of CD39 and CD73 inhibition in preclinical and clinical studies are discussed. Finally, we provide insights into potential clinical application of adenosinergic and other purinergic-targeting therapies and forecast how these might develop in combination with other anti-cancer modalities.",
"title": ""
},
{
"docid": "d18ed4c40450454d6f517c808da7115a",
"text": "Polythelia is a rare congenital malformation that occurs in 1-2% of the population. Intra-areolar polythelia is the presence of one or more supernumerary nipples located within the areola. This is extremely rare. This article presents 3 cases of intra-areolar polythelia treated at our Department. These cases did not present other associated malformation. Surgical correction was performed for psychological and cosmetic reasons using advancement flaps. The aesthetic and functional results were satisfactory.",
"title": ""
},
{
"docid": "146c58e49221a9e8f8dbcdc149737924",
"text": "Gesture recognition is to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. Hand Gestures have greater importance in designing an intelligent and efficient human–computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper a survey on various recent gesture recognition approaches is provided with particular emphasis on hand gestures. A review of static hand posture methods are explained with different tools and algorithms applied on gesture recognition system, including connectionist models, hidden Markov model, and fuzzy clustering. Challenges and future research directions are also highlighted.",
"title": ""
},
{
"docid": "25739e04a42f7309127596846d9eefa3",
"text": "We consider a new formulation of abduction. Our formulation differs from the existing approaches in that it does not cast the “plausibility” of explanations in terms of either syntactic minimality or an explicitly given prior distribution. Instead, “plausibility,” along with the rules of the domain, is learned from concrete examples (settings of attributes). Our version of abduction thus falls in the “learning to reason” framework of Khardon and Roth. Such approaches enable us to capture a natural notion of “plausibility” in a domain while avoiding the problem of specifying an explicit representation of what is “plausible,” a task that humans find extremely difficult. In this work, we specifically consider the question of which syntactic classes of formulas have efficient algorithms for abduction. It turns out that while the representation of the query is irrelevant to the computational complexity of our problem, the representation of the explanation critically affects its tractability. We find that the class of k-DNF explanations can be found in polynomial time for any fixed k; but, we also find evidence that even very weak versions of our abduction task are intractable for the usual class of conjunctive explanations. This evidence is provided by a connection to the usual, inductive PAC-learning model proposed by Valiant. We also briefly consider an exception-tolerant variant of abduction. We observe that it is possible for polynomial-time algorithms to tolerate a few adversarially chosen exceptions, again for the class of kDNF explanations. All of the algorithms we study are particularly simple, and indeed are variants of a rule proposed by Mill.",
"title": ""
},
{
"docid": "7858a5855b7a8420f74bb3af064c31ed",
"text": "Current technologies for searching scientific litearture do not support answering many queries that could significantly improve the day-to-day activities of a researcher. For instance, a Machine Translation (MT) researcher would like to answer questions such as: • Which are the best published results reported on the NIST-09 Chinese dataset? • What are the most important methods for speeding up phrase-based decoding? • Are there papers showing that a neural translation model is better than a non-neural? Current methods cannot yet infer the main elements of experiments reported in papers; there is no consenus on what the elements and the relations between them should be.",
"title": ""
},
{
"docid": "460fd722b6dffdb78ce8696f801cf02d",
"text": "Clustered regularly interspaced short palindromic repeats (CRISPR) are a distinctive feature of the genomes of most Bacteria and Archaea and are thought to be involved in resistance to bacteriophages. We found that, after viral challenge, bacteria integrated new spacers derived from phage genomic sequences. Removal or addition of particular spacers modified the phage-resistance phenotype of the cell. Thus, CRISPR, together with associated cas genes, provided resistance against phages, and resistance specificity is determined by spacer-phage sequence similarity.",
"title": ""
},
{
"docid": "98cfdc1fb3c957283eb62470376edf82",
"text": "In this paper we present the MDA framework (standing for Mechanics, Dynamics, and Aesthetics), developed and taught as part of the Game Design and Tuning Workshop at the Game Developers Conference, San Jose 2001-2004. MDA is a formal approach to understanding games one which attempts to bridge the gap between game design and development, game criticism, and technical game research. We believe this methodology will clarify and strengthen the iterative processes of developers, scholars and researchers alike, making it easier for all parties to decompose, study and design a broad class of game designs and game artifacts.",
"title": ""
},
{
"docid": "c2fc4e65c484486f5612f4006b6df102",
"text": "Although flat item category structure where categories are independent in a same level has been well studied to enhance recommendation performance, in many real applications, item category is often organized in hierarchies to reflect the inherent correlations among categories. In this paper, we propose a novel matrix factorization model by exploiting category hierarchy from the perspectives of users and items for effective recommendation. Specifically, a user (an item) can be influenced (characterized) by her preferred categories (the categories it belongs to) in the hierarchy. We incorporate how different categories in the hierarchy co-influence a user and an item. Empirical results show the superiority of our approach against other counterparts.",
"title": ""
},
{
"docid": "18153ed3c2141500e0f245e3846df173",
"text": "This paper presents the modeling and simulation of a 25 kV 50 Hz AC traction system using power system block set (PSB) / SIMULINK software package. The three-phase system with substations, track section with rectifier-fed DC locomotives and a detailed traction load are included in the model. The model has been used to study the effect of loading and fault conditions in 25 kV AC traction. The relay characteristic proposed is a combination of two quadrilaterals in the X-R plane. A brief summary of the hardware set-up used to implement and test the relay characteristic using a Texas Instruments TMS320C50 digital signal processor (DSP) has also been presented.",
"title": ""
},
{
"docid": "a1d9c897f926fa4cc45ebc6209deb6bc",
"text": "This paper addresses the relationship between the ego, id, and internal objects. While ego psychology views the ego as autonomous of the drives, a less well-known alternative position views the ego as constituted by the drives. Based on Freud's ego-instinct account, this position has developed into a school of thought which postulates that the drives act as knowers. Given that there are multiple drives, this position proposes that personality is constituted by multiple knowers. Following on from Freud, the ego is viewed as a composite sub-set of the instinctual drives (ego-drives), whereas those drives cut off from expression form the id. The nature of the \"self\" is developed in terms of identification and the possibility of multiple personalities is also established. This account is then extended to object-relations and the explanatory value of the ego-drive account is discussed in terms of the addressing the nature of ego-structures and the dynamic nature of internal objects. Finally, the impact of psychological conflict and the significance of repression for understanding the nature of splits within the psyche are also discussed.",
"title": ""
},
{
"docid": "63ae128637d0855ca1b09793314aad03",
"text": "Gray platelet syndrome (GPS) is a predominantly recessive platelet disorder that is characterized by mild thrombocytopenia with large platelets and a paucity of α-granules; these abnormalities cause mostly moderate but in rare cases severe bleeding. We sequenced the exomes of four unrelated individuals and identified NBEAL2 as the causative gene; it has no previously known function but is a member of a gene family that is involved in granule development. Silencing of nbeal2 in zebrafish abrogated thrombocyte formation.",
"title": ""
},
{
"docid": "6eff790c76e7eb7016eef6d306a0dde0",
"text": "To cite: Rozenblum R, Bates DW. BMJ Qual Saf 2013;22:183–186. Patients are central to healthcare delivery, yet all too often their perspectives and input have not been considered by providers. 2 This is beginning to change rapidly and is having a major impact across a range of dimensions. Patients are becoming more engaged in their care and patient-centred healthcare has emerged as a major domain of quality. At the same time, social media in particular and the internet more broadly are widely recognised as having produced huge effects across societies. For example, few would have predicted the Arab Spring, yet it was clearly enabled by media such as Facebook and Twitter. Now these technologies are beginning to pervade the healthcare space, just as they have so many others. But what will their effects be? These three domains—patient-centred healthcare, social media and the internet— are beginning to come together, with powerful and unpredictable consequences. We believe that they have the potential to create a major shift in how patients and healthcare organisations connect, in effect, the ‘perfect storm’, a phrase that has been used to describe a situation in which a rare combination of circumstances result in an event of unusual magnitude creating the potential for non-linear change. Historically, patients have paid relatively little attention to quality, safety and the experiences large groups of other patients have had, and have made choices about where to get healthcare based largely on factors like reputation, the recommendations of a friend or proximity. Part of the reason for this was that information about quality or the opinions of others about their care was hard to access before the internet. Today, patients appear to be becoming more engaged with their care in general, and one of the many results is that they are increasingly using the internet to share and rate their experiences of health care. They are also using the internet to connect with others having similar illnesses, to share experiences, and beginning to manage their illnesses by leveraging these technologies. While it is not yet clear what impact patients’ use of the internet and social media will have on healthcare, they will definitely have a major effect. Healthcare organisations have generally been laggards in this space—they need to start thinking about how they will use the internet in a variety of ways, with specific examples being leveraging the growing number of patients that are using the internet to describe their experiences of healthcare and how they can incorporate patient’s feedback via the internet into the organisational quality improvement process.",
"title": ""
},
{
"docid": "8d5759855079e2ddaab2e920b93ca2a3",
"text": "In a number of information security scenarios, human beings can be better than technical security measures at detecting threats. This is particularly the case when a threat is based on deception of the user rather than exploitation of a specific technical flaw, as is the case of spear-phishing, application spoofing, multimedia masquerading and other semantic social engineering attacks. Here, we put the concept of the human-as-a-security-sensor to the test with a first case study on a small number of participants subjected to different attacks in a controlled laboratory environment and provided with a mechanism to report these attacks if they spot them. A key challenge is to estimate the reliability of each report, which we address with a machine learning approach. For comparison, we evaluate the ability of known technical security countermeasures in detecting the same threats. This initial proof of concept study shows that the concept is viable.",
"title": ""
},
{
"docid": "164fca8833981d037f861aada01d5f7f",
"text": "Kernel methods provide a principled way to perform non linear, nonparametric learning. They rely on solid functional analytic foundations and enjoy optimal statistical properties. However, at least in their basic form, they have limited applicability in large scale scenarios because of stringent computational requirements in terms of time and especially memory. In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that allows to efficiently process millions of points. FALKON is derived combining several algorithmic principles, namely stochastic subsampling, iterative solvers and preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved requiring essentially O(n) memory and O(n √ n) time. An extensive experimental analysis on large scale datasets shows that, even with a single machine, FALKON outperforms previous state of the art solutions, which exploit parallel/distributed architectures.",
"title": ""
},
{
"docid": "c08e33f44b8e27529385b1557906dc81",
"text": "A key challenge in wireless cognitive radio networks is to maximize the total throughput also known as the sum rates of all the users while avoiding the interference of unlicensed band secondary users from overwhelming the licensed band primary users. We study the weighted sum rate maximization problem with both power budget and interference temperature constraints in a cognitive radio network. This problem is nonconvex and generally hard to solve. We propose a reformulation-relaxation technique that leverages nonnegative matrix theory to first obtain a relaxed problem with nonnegative matrix spectral radius constraints. A useful upper bound on the sum rates is then obtained by solving a convex optimization problem over a closed bounded convex set. It also enables the sum-rate optimality to be quantified analytically through the spectrum of specially-crafted nonnegative matrices. Furthermore, we obtain polynomial-time verifiable sufficient conditions that can identify polynomial-time solvable problem instances, which can be solved by a fixed-point algorithm. As a by-product, an interesting optimality equivalence between the nonconvex sum rate problem and the convex max-min rate problem is established. In the general case, we propose a global optimization algorithm by utilizing our convex relaxation and branch-and-bound to compute an ε-optimal solution. Our technique exploits the nonnegativity of the physical quantities, e.g., channel parameters, powers and rates, that enables key tools in nonnegative matrix theory such as the (linear and nonlinear) Perron-Frobenius theorem, quasi-invertibility, Friedland-Karlin inequalities to be employed naturally. Numerical results are presented to show that our proposed algorithms are theoretically sound and have relatively fast convergence time even for large-scale problems",
"title": ""
},
{
"docid": "7c13ebe2897fc4870a152159cda62025",
"text": "Tuberculosis (TB) remains a major health threat, killing nearly 2 million individuals around this globe, annually. The only vaccine, developed almost a century ago, provides limited protection only during childhood. After decades without the introduction of new antibiotics, several candidates are currently undergoing clinical investigation. Curing TB requires prolonged combination of chemotherapy with several drugs. Moreover, monitoring the success of therapy is questionable owing to the lack of reliable biomarkers. To substantially improve the situation, a detailed understanding of the cross-talk between human host and the pathogen Mycobacterium tuberculosis (Mtb) is vital. Principally, the enormous success of Mtb is based on three capacities: first, reprogramming of macrophages after primary infection/phagocytosis to prevent its own destruction; second, initiating the formation of well-organized granulomas, comprising different immune cells to create a confined environment for the host-pathogen standoff; third, the capability to shut down its own central metabolism, terminate replication, and thereby transit into a stage of dormancy rendering itself extremely resistant to host defense and drug treatment. Here, we review the molecular mechanisms underlying these processes, draw conclusions in a working model of mycobacterial dormancy, and highlight gaps in our understanding to be addressed in future research.",
"title": ""
},
{
"docid": "5f8f9a407c42a6a3c6c269c22d36f684",
"text": "This paper proposes a coarse-fine dual-loop architecture for the digital low drop-out (LDO) regulators with fast transient response and more than 200-mA load capacity. In the proposed scheme, the output voltage is coregulated by two loops, namely, the coarse loop and the fine loop. The coarse loop adopts a fast current-mirror flash analog to digital converter and supplies high output current to enhance the transient performance, while the fine loop delivers low output current and helps reduce the voltage ripples and improve the regulation accuracies. Besides, a digital controller is implemented to prevent contentions between the two loops. Fabricated in a 28-nm Samsung CMOS process, the proposed digital LDO achieves maximum load up to 200 mA when the input and the output voltages are 1.1 and 0.9 V, respectively, with a chip area of 0.021 mm2. The measured output voltage drop of around 120 mV is observed for a load step of 180 mA.",
"title": ""
},
{
"docid": "c71cfc228764fc96e7e747e119445939",
"text": "This review discusses and summarizes the recent developments and advances in the use of biodegradable materials for bone repair purposes. The choice between using degradable and non-degradable devices for orthopedic and maxillofacial applications must be carefully weighed. Traditional biodegradable devices for osteosynthesis have been successful in low or mild load bearing applications. However, continuing research and recent developments in the field of material science has resulted in development of biomaterials with improved strength and mechanical properties. For this purpose, biodegradable materials, including polymers, ceramics and magnesium alloys have attracted much attention for osteologic repair and applications. The next generation of biodegradable materials would benefit from recent knowledge gained regarding cell material interactions, with better control of interfacing between the material and the surrounding bone tissue. The next generations of biodegradable materials for bone repair and regeneration applications require better control of interfacing between the material and the surrounding bone tissue. Also, the mechanical properties and degradation/resorption profiles of these materials require further improvement to broaden their use and achieve better clinical results.",
"title": ""
},
{
"docid": "4142b1fc9e37ffadc6950105c3d99749",
"text": "Just-noticeable distortion (JND), which refers to the maximum distortion that the human visual system (HVS) cannot perceive, plays an important role in perceptual image and video processing. In comparison with JND estimation for images, estimation of the JND profile for video needs to take into account the temporal HVS properties in addition to the spatial properties. In this paper, we develop a spatio-temporal model estimating JND in the discrete cosine transform domain. The proposed model incorporates the spatio-temporal contrast sensitivity function, the influence of eye movements, luminance adaptation, and contrast masking to be more consistent with human perception. It is capable of yielding JNDs for both still images and video with significant motion. The experiments conducted in this study have demonstrated that the JND values estimated for video sequences with moving objects by the model are in line with the HVS perception. The accurate JND estimation of the video towards the actual visibility bounds can be translated into resource savings (e.g., for bandwidth/storage or computation) and performance improvement in video coding and other visual processing tasks (such as perceptual quality evaluation, visual signal restoration/enhancement, watermarking, authentication, and error protection)",
"title": ""
},
{
"docid": "d9bd23208ab6eb8688afea408a4c9eba",
"text": "A novel ultra-wideband (UWB) bandpass filter with 5 to 6 GHz rejection band is proposed. The multiple coupled line structure is incorporated with multiple-mode resonator (MMR) to provide wide transmission band and enhance out-of band performance. To inhibit the signals ranged from 5- to 6-GHz, four stepped-impedance open stubs are implemented on the MMR without increasing the size of the proposed filter. The design of the proposed UWB filter has two transmission bands. The first passband from 2.8 GHz to 5 GHz has less than 2 dB insertion loss and greater than 18 dB return loss. The second passband within 6 GHz and 10.6 GHz has less than 1.5 dB insertion loss and greater than 15 dB return loss. The rejection at 5.5 GHz is better than 50 dB. This filter can be integrated in UWB radio systems and efficiently enhance the interference immunity from WLAN.",
"title": ""
}
] |
scidocsrr
|
fec5a9f5e8e9adf4083b558236256656
|
Green-lighting Movie Scripts : Revenue Forecasting and Risk Management
|
[
{
"docid": "f66854fd8e3f29ae8de75fc83d6e41f5",
"text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"title": ""
}
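As a concrete special case of the kappa-type agreement measures discussed above, the sketch below computes Cohen's kappa for two observers; the paper's generalized multivariate formulation is broader than this two-rater case, and the example ratings are invented:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same n items."""
    n = len(ratings_a)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    ca, cb = Counter(ratings_a), Counter(ratings_b)
    p_exp = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (p_obs - p_exp) / (1 - p_exp)   # undefined if p_exp == 1

print(cohens_kappa(["pos", "neg", "pos", "pos"],
                   ["pos", "neg", "neg", "pos"]))   # 0.5
```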
] |
[
{
"docid": "0de0093ab3720901d4704bfeb7be4093",
"text": "Big Data analytics can revolutionize the healthcare industry. It can improve operational efficiencies, help predict and plan responses to disease epidemics, improve the quality of monitoring of clinical trials, and optimize healthcare spending at all levels from patients to hospital systems to governments. This paper provides an overview of Big Data, applicability of it in healthcare, some of the work in progress and a future outlook on how Big Data analytics can improve overall quality in healthcare systems.",
"title": ""
},
{
"docid": "c68ec0f721c8d8bfa27a415ba10708cf",
"text": "Textures are widely used in modern computer graphics. Their size, however, is often a limiting factor. Considering the widespread adaptation of mobile virtual and augmented reality applications, efficient storage of textures has become an important factor.\n We present an approach to analyse textures of a given mesh and compute a new set of textures with the goal of improving storage efficiency and reducing memory requirements. During this process the texture coordinates of the mesh are updated as required. Textures are analysed based on the UV-coordinates of one or more meshes and deconstructed into per-triangle textures. These are further analysed to detect single coloured as well as identical per-triangle textures. Our approach aims to remove these redundancies in order to reduce the amount of memory required to store the texture data. After this analysis, the per-triangle textures are compiled into a new set of texture images of user defined size. Our algorithm aims to pack texture data as tightly as possible in order to reduce the memory requirements.",
"title": ""
},
{
"docid": "7874a6681c45d87345197245e1e054fe",
"text": "The continuous processing of streaming data has become an important aspect in many applications. Over the last years a variety of different streaming platforms has been developed and a number of open source frameworks is available for the implementation of streaming applications. In this report, we will survey the landscape of existing streaming platforms. Starting with an overview of the evolving developments in the recent past, we will discuss the requirements of modern streaming architectures and present the ways these are approached by the different frameworks.",
"title": ""
},
{
"docid": "8decac4ff789460595664a38e7527ed6",
"text": "Unit selection synthesis has shown itself to be capable of producing high quality natural sounding synthetic speech when constructed from large databases of well-recorded, well-labeled speech. However, the cost in time and expertise of building such voices is still too expensive and specialized to be able to build individual voices for everyone. The quality in unit selection synthesis is directly related to the quality and size of the database used. As we require our speech synthesizers to have more variation, style and emotion, for unit selection synthesis, much larger databases will be required. As an alternative, more recently we have started looking for parametric models for speech synthesis, that are still trained from databases of natural speech but are more robust to errors and allow for better modeling of variation. This paper presents the CLUSTERGEN synthesizer which is implemented within the Festival/FestVox voice building environment. As well as the basic technique, three methods of modeling dynamics in the signal are presented and compared: a simple point model, a basic trajectory model and a trajectory model with overlap and add.",
"title": ""
},
{
"docid": "c99e4708a72c08569c25423efbe67775",
"text": "Predicting the next activity of a running process is an important aspect of process management. Recently, artificial neural networks, so called deep-learning approaches, have been proposed to address this challenge. This demo paper describes a software application that applies the Tensorflow deep-learning framework to process prediction. The software application reads industry-standard XES files for training and presents the user with an easy-to-use graphical user interface for both training and prediction. The system provides several improvements over earlier work. This demo paper focuses on the software implementation and describes the architecture and user interface.",
"title": ""
},
{
"docid": "08ca7be2334de477905e8766c8612c8f",
"text": "a r t i c l e i n f o a b s t r a c t A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.",
"title": ""
},
{
"docid": "fb8e6eac761229fc8c12339fb68002ed",
"text": "Cerebrovascular disease results from any pathological process of the blood vessels supplying the brain. Stroke, characterised by its abrupt onset, is the third leading cause of death in humans. This rare condition in dogs is increasingly being recognised with the advent of advanced diagnostic imaging. Magnetic resonance imaging (MRI) is the first choice diagnostic tool for stroke, particularly using diffusion-weighted images and magnetic resonance angiography for ischaemic stroke and gradient echo sequences for haemorrhagic stroke. An underlying cause is not always identified in either humans or dogs. Underlying conditions that may be associated with canine stroke include hypothyroidism, neoplasia, sepsis, hypertension, parasites, vascular malformation and coagulopathy. Treatment is mainly supportive and recovery often occurs within a few weeks. The prognosis is usually good if no underlying disease is found.",
"title": ""
},
{
"docid": "66782c46d59dd9ef225e9f3ea0b47cfe",
"text": "Intraoperative vital signals convey a wealth of complex temporal information that can provide significant insights into a patient's physiological status during the surgery, as well as outcomes after the surgery. Our study involves the use of a deep recurrent neural network architecture to predict patient's outcomes after the surgery, as well as to predict the immediate changes in the intraoperative signals during the surgery. More specifically, we will use a Long Short-Term Memory (LSTM) model which is a gated deep recurrent neural network architecture. We have performed two experiments on a large intraoperative dataset of 12,036 surgeries containing information on 7 intraoperative signals including body temperature, respiratory rate, heart rate, diastolic blood pressure, systolic blood pressure, fraction of inspired O2 and end-tidal CO2. We first evaluated the capability of LSTM in predicting the immediate changes in intraoperative signals, and then we evaluated its performance on predicting each patient's length of stay outcome. Our experiments show the effectiveness of LSTM with promising results on both tasks compared to the traditional models.",
"title": ""
},
{
"docid": "4799b4aa7e936d88fef0bb1e1f95f401",
"text": "This article summarizes and reviews the literature on neonaticide, infanticide, and filicide. A literature review was conducted using the Medline database: the cue terms neonaticide, infanticide, and filicide were searched. One hundred-fifteen articles were reviewed; of these, 51 are cited in our article. We conclude that while infanticide dates back to the beginning of recorded history, little is known about what causes parents to murder their children. To this end, further research is needed to identify potential perpetrators and to prevent subsequent acts of child murder by a parent.",
"title": ""
},
{
"docid": "852c85ecbed639ea0bfe439f69fff337",
"text": "In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables. These two quantities are complementary and are jointly applied to information behavior analysis in most cases. The uncertainty property in information asserts a fundamental trade-off between Fisher information and Shannon information, which enlightens us the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the FisherShannon plane, and demonstrate that the representation learning and the log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide with a better comprehension of VAEs in tasks such as highresolution reconstruction, and representation learning in the perspective of Fisher information and Shannon information. We further propose a variant of VAEs, termed as Fisher auto-encoder (FAE), for practical needs to balance Fisher information and Shannon information. Our experimental results have demonstrated its promise in improving the reconstruction accuracy and avoiding the non-informative latent code as occurred in previous works.",
"title": ""
},
{
"docid": "b752f0f474b8f275f09d446818647564",
"text": "n engl j med 377;15 nejm.org October 12, 2017 4. Aysola J, Tahirovic E, Troxel AB, et al. A randomized controlled trial of opt-in versus opt-out enrollment into a diabetes behavioral intervention. Am J Health Promot 2016 October 21 (Epub ahead of print). 5. Mehta SJ, Troxel AB, Marcus N, et al. Participation rates with opt-out enrollment in a remote monitoring intervention for patients with myocardial infarction. JAMA Cardiol 2016; 1: 847-8. DOI: 10.1056/NEJMp1707991",
"title": ""
},
{
"docid": "ec6c62f25c987446522b49840c4242d7",
"text": "Have you ever been in a sauna? If yes, according to our recent survey conducted on Amazon Mechanical Turk, people who go to saunas are more likely to know that Mike Stonebraker is not a character in “The Simpsons”. While this result clearly makes no sense, recently proposed tools to automatically suggest visualizations, correlations, or perform visual data exploration, significantly increase the chance that a user makes a false discovery like this one. In this paper, we first show how current tools mislead users to consider random fluctuations as significant discoveries. We then describe our vision and early results for QUDE, a new system for automatically controlling the various risk factors during the data exploration process.",
"title": ""
},
{
"docid": "c9582409212e6f9b194175845216b2b6",
"text": "Although the amygdala complex is a brain area critical for human behavior, knowledge of its subspecialization is primarily derived from experiments in animals. We here employed methods for large-scale data mining to perform a connectivity-derived parcellation of the human amygdala based on whole-brain coactivation patterns computed for each seed voxel. Voxels within the histologically defined human amygdala were clustered into distinct groups based on their brain-wide coactivation maps. Using this approach, connectivity-based parcellation divided the amygdala into three distinct clusters that are highly consistent with earlier microstructural distinctions. Meta-analytic connectivity modelling then revealed the derived clusters' brain-wide connectivity patterns, while meta-data profiling allowed their functional characterization. These analyses revealed that the amygdala's laterobasal nuclei group was associated with coordinating high-level sensory input, whereas its centromedial nuclei group was linked to mediating attentional, vegetative, and motor responses. The often-neglected superficial nuclei group emerged as particularly sensitive to olfactory and probably social information processing. The results of this model-free approach support the concordance of structural, connectional, and functional organization in the human amygdala and point to the importance of acknowledging the heterogeneity of this region in neuroimaging research.",
"title": ""
},
{
"docid": "2c7bafac9d4c4fedc43982bd53c99228",
"text": "One of the uniqueness of business is for firm to be customer focus. Study have shown that this could be achieved through blockchain technology in enhancing customer loyalty programs (Michael J. Casey 2015; John Ream et al 2016; Sean Dennis 2016; James O'Brien and Dave Montali, 2016; Peiguss 2012; Singh, Khan, 2012; and among others). Recent advances in block chain technology have provided the tools for marketing managers to create a new generation of being able to assess the level of control companies want to have over customer data and activities as well as security/privacy issues that always arise with every additional participant of the network While block chain technology is still in the early stages of adoption, it could prove valuable for loyalty rewards program providers. Hundreds of blockchain initiatives are already underway in various industries, particularly airline services, even though standardization is far from a reality. One attractive feature of loyalty rewards is that they are not core to business revenue and operations and companies willing to implement blockchain for customer loyalty programs benefit lower administrative costs, improved customer experiences, and increased user engagement (Michael J. Casey, 2015; James O'Brien and Dave Montali 2016; Peiguss 2012; Singh, Abstract: In today business world, companies have accelerated the use of Blockchain technology to enhance the brand recognition of their products and services. Company believes that the integration of Blockchain into the current business marketing strategy will enhance the growth of their products, and thus acting as a customer loyalty solution. The goal of this study is to obtain a deep understanding of the impact of blockchain technology in enhancing customer loyalty programs of airline business. To achieve the goal of the study, a contextualized and literature based research instrument was used to measure the application of the investigated “constructs”, and a survey was conducted to collect data from the sample population. A convenience sample of total (450) Questionnaires were distributed to customers, and managers of the surveyed airlines who could be reached by the researcher. 274 to airline customers/passengers, and the remaining 176 to managers in the various airlines researched. Questionnaires with instructions were hand-delivered to respondents. Out of the 397 completed questionnaires returned, 359 copies were found usable for the present study, resulting in an effective response rate of 79.7%. The respondents had different social, educational, and occupational backgrounds. The research instrument showed encouraging evidence of reliability and validity. Data were analyzed using descriptive statistics, percentages and ttest analysis. The findings clearly show that there is significant evidence that blockchain technology enhance customer loyalty programs of airline business. It was discovered that Usage of blockchain technology is emphasized by the surveyed airlines operators in Nigeria., the extent of effective usage of customer loyalty programs is related to blockchain technology, and that he level or extent of effective usage of blockchain technology does affect the achievement of customer loyalty program goals and objectives. Feedback from the research will assist to expand knowledge as to the usefulness of blockchain technology being a customer loyalty solution.",
"title": ""
},
{
"docid": "f65c3e60dbf409fa2c6e58046aad1e1c",
"text": "The gut microbiota is essential for the development and regulation of the immune system and the metabolism of the host. Germ-free animals have altered immunity with increased susceptibility to immunologic diseases and show metabolic alterations. Here, we focus on two of the major immune-mediated microbiota-influenced components that signal far beyond their local environment. First, the activation or suppression of the toll-like receptors (TLRs) by microbial signals can dictate the tone of the immune response, and they are implicated in regulation of the energy homeostasis. Second, we discuss the intestinal mucosal surface is an immunologic component that protects the host from pathogenic invasion, is tightly regulated with regard to its permeability and can influence the systemic energy balance. The short chain fatty acids are a group of molecules that can both modulate the intestinal barrier and escape the gut to influence systemic health. As modulators of the immune response, the microbiota-derived signals influence functions of distant organs and can change susceptibility to metabolic diseases.",
"title": ""
},
{
"docid": "a8e3fd9ddfdb1eaea980246489579812",
"text": "With modern computer graphics, we can generate enormous amounts of 3D scene data. It is now possible to capture high-quality 3D representations of large real-world environments. Large shape and scene databases, such as the Trimble 3D Warehouse, are publicly accessible and constantly growing. Unfortunately, while a great amount of 3D content exists, most of it is detached from the semantics and functionality of the objects it represents. In this paper, we present a method to establish a correlation between the geometry and the functionality of 3D environments. Using RGB-D sensors, we capture dense 3D reconstructions of real-world scenes, and observe and track people as they interact with the environment. With these observations, we train a classifier which can transfer interaction knowledge to unobserved 3D scenes. We predict a likelihood of a given action taking place over all locations in a 3D environment and refer to this representation as an action map over the scene. We demonstrate prediction of action maps in both 3D scans and virtual scenes. We evaluate our predictions against ground truth annotations by people, and present an approach for characterizing 3D scenes by functional similarity using action maps.",
"title": ""
},
{
"docid": "87e732240f00b112bf2bb44af0ff8ca1",
"text": "Spoken Dialogue Systems (SDS) are man-machine interfaces which use natural language as the medium of interaction. Dialogue corpora collection for the purpose of training and evaluating dialogue systems is an expensive process. User simulators aim at simulating human users in order to generate synthetic data. Existing methods for user simulation mainly focus on generating data with the same statistical consistency as in some reference dialogue corpus. This paper outlines a novel approach for user simulation based on Inverse Reinforcement Learning (IRL). The task of building the user simulator is perceived as a task of imitation learning.",
"title": ""
},
{
"docid": "32f6db1bf35da397cd61d744a789d49c",
"text": "Mushroom poisoning is the main cause of mortality in food poisoning incidents in China. Although some responsible mushroom species have been identified, some were identified inaccuratly. This study investigated and analyzed 102 mushroom poisoning cases in southern China from 1994 to 2012, which involved 852 patients and 183 deaths, with an overall mortality of 21.48 %. The results showed that 85.3 % of poisoning cases occurred from June to September, and involved 16 species of poisonous mushroom: Amanita species (A. fuliginea, A. exitialis, A. subjunquillea var. alba, A. cf. pseudoporphyria, A. kotohiraensis, A. neoovoidea, A. gymnopus), Galerina sulciceps, Psilocybe samuiensis, Russula subnigricans, R. senecis, R. japonica, Chlorophyllum molybdites, Paxillus involutus, Leucocoprinus cepaestipes and Pulveroboletus ravenelii. Six species (A. subjunquillea var. alba, A. cf. pseudoporphyria, A. gymnopus, R. japonica, Psilocybe samuiensis and Paxillus involutus) are reported for the first time in poisoning reports from China. Psilocybe samuiensis is a newly recorded species in China. The genus Amanita was responsible for 70.49 % of fatalities; the main lethal species were A. fuliginea and A. exitialis. Russula subnigricans caused 24.59 % of fatalities, and five species showed mortality >20 % (A. fuliginea, A. exitialis, A. subjunquillea var. alba, R. subnigricans and Paxillus involutus). Mushroom poisoning symptoms were classified from among the reported clinical symptoms. Seven types of mushroom poisoning symptoms were identified for clinical diagnosis and treatment in China, including gastroenteritis, acute liver failure, acute renal failure, psychoneurological disorder, hemolysis, rhabdomyolysis and photosensitive dermatitis.",
"title": ""
},
{
"docid": "a6fc1c70b4bab666d5d580214fa3e09f",
"text": "Software designs decay as systems, uses, and operational environments evolve. Decay can involve the design patterns used to structure a system. Classes that participate in design pattern realizations accumulate grime—non-pattern-related code. Design pattern realizations can also rot, when changes break the structural or functional integrity of a design pattern. Design pattern rot can prevent a pattern realization from fulfilling its responsibilities, and thus represents a fault. Grime buildup does not break the structural integrity of a pattern but can reduce system testability and adaptability. This research examined the extent to which software designs actually decay, rot, and accumulate grime by studying the aging of design patterns in three successful object-oriented systems. We generated UML models from the three implementations and employed a multiple case study methodology to analyze the evolution of the designs. We found no evidence of design pattern rot in these systems. However, we found considerable evidence of pattern decay due to grime. Dependencies between design pattern components increased without regard for pattern intent, reducing pattern modularity, and decreasing testability and adaptability. The study of decay and grime showed that the grime that builds up around design patterns is mostly due to increases in coupling.",
"title": ""
},
{
"docid": "998bf65b2e95db90eb9fab8e13b47ff6",
"text": "Recently, deep neural networks (DNNs) have been regarded as the state-of-the-art classification methods in a wide range of applications, especially in image classification. Despite the success, the huge number of parameters blocks its deployment to situations with light computing resources. Researchers resort to the redundancy in the weights of DNNs and attempt to find how fewer parameters can be chosen while preserving the accuracy at the same time. Although several promising results have been shown along this research line, most existing methods either fail to significantly compress a well-trained deep network or require a heavy fine-tuning process for the compressed network to regain the original performance. In this paper, we propose the Block Term networks (BT-nets) in which the commonly used fully-connected layers (FC-layers) are replaced with block term layers (BT-layers). In BT-layers, the inputs and the outputs are reshaped into two low-dimensional high-order tensors, then block-term decomposition is applied as tensor operators to connect them. We conduct extensive experiments on benchmark datasets to demonstrate that BT-layers can achieve a very large compression ratio on the number of parameters while preserving the representation power of the original FC-layers as much as possible. Specifically, we can get a higher performance while requiring fewer parameters compared with the tensor train method.",
"title": ""
}
] |
scidocsrr
|
c6b856db07d45a093186b5c5a651d2b1
|
BUILDING INFORMATION MODELLING FOR CULTURAL HERITAGE : A REVIEW
|
[
{
"docid": "47cf10951d13e1da241a5551217aa2d5",
"text": "Despite the widespread adoption of building information modelling (BIM) for the design and lifecycle management of new buildings, very little research has been undertaken to explore the value of BIM in the management of heritage buildings and cultural landscapes. To that end, we are investigating the construction of BIMs that incorporate both quantitative assets (intelligent objects, performance data) and qualitative assets (historic photographs, oral histories, music). Further, our models leverage the capabilities of BIM software to provide a navigable timeline that chronicles tangible and intangible changes in the past and projections into the future. In this paper, we discuss three projects undertaken by the authors that explore an expanded role for BIM in the documentation and conservation of architectural heritage. The projects range in scale and complexity and include: a cluster of three, 19th century heritage buildings in the urban core of Toronto, Canada; a 600 hectare village in rural, south-eastern Ontario with significant modern heritage value, and a proposed web-centered BIM database for materials and methods of construction specific to heritage conservation.",
"title": ""
}
] |
[
{
"docid": "a922051835f239db76be1dbb8edead3e",
"text": "Among the simplest and most intuitively appealing classes of nonprobabilistic classification procedures are those that weight the evidence of nearby sample observations most heavily. More specifically, one might wish to weight the evidence of a neighbor close to an unclassified observation more heavily than the evidence of another neighbor which is at a greater distance from the unclassified observation. One such classification rule is described which makes use of a neighbor weighting function for the purpose of assigning a class to an unclassified sample. The admissibility of such a rule is also considered.",
"title": ""
},
{
"docid": "959a43b6b851a4a255466296efac7299",
"text": "Technology in football has been debated by pundits, players and fans all over the world for the past decade. FIFA has recently commissioned the use of ‘Hawk-Eye’ and ‘Goal Ref’ goal line technology systems at the 2014 World Cup in Brazil. This paper gives an in depth evaluation of the possible technologies that could be used in football and determines the potential benefits and implications these systems could have on the officiating of football matches. The use of technology in other sports is analyzed to come to a conclusion as to whether officiating technology should be used in football. Will football be damaged by the loss of controversial incidents such as Frank Lampard’s goal against Germany at the 2010 World Cup? Will cost, accuracy and speed continue to prevent the use of officiating technology in football? Time will tell, but for now, any advancement in the use of technology in football will be met by some with discontent, whilst others see it as moving the sport into the 21 century.",
"title": ""
},
{
"docid": "3df12301c628a4b1fc9421c80b79b42b",
"text": "Cellular processes can only be understood as the dynamic interplay of molecules. There is a need for techniques to monitor interactions of endogenous proteins directly in individual cells and tissues to reveal the cellular and molecular architecture and its responses to perturbations. Here we report our adaptation of the recently developed proximity ligation method to examine the subcellular localization of protein-protein interactions at single-molecule resolution. Proximity probes—oligonucleotides attached to antibodies against the two target proteins—guided the formation of circular DNA strands when bound in close proximity. The DNA circles in turn served as templates for localized rolling-circle amplification (RCA), allowing individual interacting pairs of protein molecules to be visualized and counted in human cell lines and clinical specimens. We used this method to show specific regulation of protein-protein interactions between endogenous Myc and Max oncogenic transcription factors in response to interferon-γ (IFN-γ) signaling and low-molecular-weight inhibitors.",
"title": ""
},
{
"docid": "993d7ee2498f7b19ae70850026c0a0c4",
"text": "We present ALL-IN-1, a simple model for multilingual text classification that does not require any parallel data. It is based on a traditional Support Vector Machine classifier exploiting multilingual word embeddings and character n-grams. Our model is simple, easily extendable yet very effective, overall ranking 1st (out of 12 teams) in the IJCNLP 2017 shared task on customer feedback analysis in four languages: English, French, Japanese and Spanish.",
"title": ""
},
{
"docid": "b15078182915859c3eab4b174115cd0f",
"text": "We consider retrieving a specific temporal segment, or moment, from a video given a natural language text description. Methods designed to retrieve whole video clips with natural language determine what occurs in a video but not when. To address this issue, we propose the Moment Context Network (MCN) which effectively localizes natural language queries in videos by integrating local and global video features over time. A key obstacle to training our MCN model is that current video datasets do not include pairs of localized video segments and referring expressions, or text descriptions which uniquely identify a corresponding moment. Therefore, we collect the Distinct Describable Moments (DiDeMo) dataset which consists of over 10,000 unedited, personal videos in diverse visual settings with pairs of localized video segments and referring expressions. We demonstrate that MCN outperforms several baseline methods and believe that our initial results together with the release of DiDeMo will inspire further research on localizing video moments with natural language.",
"title": ""
},
{
"docid": "0277fd19009088f84ce9f94a7e942bc1",
"text": "These study it is necessary to can be used as a theoretical foundation upon which to base decision-making and strategic thinking about e-learning system. This paper proposes a new framework for assessing readiness of an organization to implement the e-learning system project on the basis of McKinsey 7S model using fuzzy logic analysis. The study considers 7 dimensions as approach to assessing the current situation of the organization prior to system implementation to identify weakness areas which may encounter the project with failure. Adopted was focus on Questionnaires and group interviews to specific data collection from three colleges in Mosul University in Iraq. This can be achieved success in building an e-learning system at the University of Mosul by readiness assessment according to the model of multidimensional based on the framework of 7S is selected by 23 factors, and thus can avoid failures or weaknesses facing the implementation process before the start of the project and a step towards enabling the administration to make decisions that achieve success in this area, as well as to avoid the high cost associated with the implementation process.",
"title": ""
},
{
"docid": "458e4b5196805b608e15ee9c566123c9",
"text": "For the first half century of animal virology, the major problem was lack of a simple method for quantitating infectious virus particles; the only method available at that time was some form or other of the serial-dilution end-point method in animals, all of which were both slow and expensive. Cloned cultured animal cells, which began to be available around 1950, provided Dulbecco with a new approach. He adapted the technique developed by Emory Ellis and Max Delbrück for assaying bacteriophage, that is, seeding serial dilutions of a given virus population onto a confluent lawn of host cells, to the measurement of Western equine encephalitis virus, and demonstrated that it also formed easily countable plaques in monolayers of chick embryo fibroblasts. The impact of this finding was enormous; animal virologists had been waiting for such a technique for decades. It was immediately found to be widely applicable to many types of cells and most viruses, gained quick acceptance, and is widely regarded as marking the beginning of molecular animal virology. Renato Dulbecco was awarded the Nobel Prize in 1975. W. K. JOKLIK",
"title": ""
},
{
"docid": "e011ab57139a9a2f6dc13033b0ab6223",
"text": "Over the last few years, virtual reality (VR) has re-emerged as a technology that is now feasible at low cost via inexpensive cellphone components. In particular, advances of high-resolution micro displays, low-latency orientation trackers, and modern GPUs facilitate immersive experiences at low cost. One of the remaining challenges to further improve visual comfort in VR experiences is the vergence-accommodation conflict inherent to all stereoscopic displays. Accurate reproduction of all depth cues is crucial for visual comfort. By combining well-known stereoscopic display principles with emerging factored light field technology, we present the first wearable VR display supporting high image resolution as well as focus cues. A light field is presented to each eye, which provides more natural viewing experiences than conventional near-eye displays. Since the eye box is just slightly larger than the pupil size, rank-1 light field factorizations are sufficient to produce correct or nearly-correct focus cues; no time-multiplexed image display or gaze tracking is required. We analyze lens distortions in 4D light field space and correct them using the afforded high-dimensional image formation. We also demonstrate significant improvements in resolution and retinal blur quality over related near-eye displays. Finally, we analyze diffraction limits of these types of displays.",
"title": ""
},
{
"docid": "3aab2226cfdee4c6446090922fdd4f2d",
"text": "Information system and data mining are important resources for the investors to make decisions. Information theory pointed that the information is increasing all the time, when the corporations build their millions of databases in order to improve the efficiency. Database technology caters to the needs of fully developing the information resources. This essay discusses the problem of decision making support system and the application of business data mining in commercial decision making. It is recommended that the intelligent decision support system should be built. Besides, the business information used in the commercial decision making must follow the framework of a whole system under guideline, which should be designed by the company.",
"title": ""
},
{
"docid": "cd08ec6c25394b3304368952cf4fb99b",
"text": "Recently, several experimental studies have been conducted on block data layout as a data transformation technique used in conjunction with tiling to improve cache performance. In this paper, we provide a theoretical analysis for the TLB and cache performance of block data layout. For standard matrix access patterns, we derive an asymptotic lower bound on the number of TLB misses for any data layout and show that block data layout achieves this bound. We show that block data layout improves TLB misses by a factor of O B compared with conventional data layouts, where B is the block size of block data layout. This reduction contributes to the improvement in memory hierarchy performance. Using our TLB and cache analysis, we also discuss the impact of block size on the overall memory hierarchy performance. These results are validated through simulations and experiments on state-of-the-art platforms.",
"title": ""
},
{
"docid": "1caaac35c25cd9efb729b57e59c41be5",
"text": "The design of elastic file synchronization services like Dropbox is an open and complex issue yet not unveiled by the major commercial providers, as it includes challenges like fine-grained programmable elasticity and efficient change notification to millions of devices. In this paper, we propose a novel architecture for file synchronization which aims to solve the above two major challenges. At the heart of our proposal lies ObjectMQ, a lightweight framework for providing programmatic elasticity to distributed objects using messaging. The efficient use of indirect communication: i) enables programmatic elasticity based on queue message processing, ii) simplifies change notifications offering simple unicast and multicast primitives; and iii) provides transparent load balancing based on queues.\n Our reference implementation is StackSync, an open source elastic file synchronization Cloud service developed in the context of the FP7 project CloudSpaces. StackSync supports both predictive and reactive provisioning policies on top of ObjectMQ that adapt to real traces from the Ubuntu One service. The feasibility of our approach has been extensively validated with an open benchmark, including commercial synchronization services like Dropbox or OneDrive.",
"title": ""
},
{
"docid": "0bc1c637d6f4334dd8a27491ebde40d6",
"text": "Osteoarthritis of the hip describes a clinical syndrome of joint pain accompanied by varying degrees of functional limitation and reduced quality of life. Osteoarthritis may not be progressive and most patients will not need surgery, with their symptoms adequately controlled by non-surgical measures. The treatment of hip osteoarthritis is aimed at reducing pain and stiffness and improving joint mobility. Total hip replacement remains the most effective treatment option but it is a major surgery with potential serious complications. NICE guideline has suggested a holistic approach to management of hip osteoarthritis which includes both nonpharmacological and pharmacological treatments. The non-pharmacological treatments range from education ,physical therapy and behavioral changes ,walking aids .The ESCAPE( Enabling Self-Management and Coping of Arthritic Pain Through Exercise) rehabilitation programme for hip and knee osteoarthritis which integrates simple education, self-management and coping strategies, with an exercise regimen has shown to be more cost-effective than usual care. There is a choice of reviewed pharmacological treatments available, but there are few current reviews of possible nonpharmacological methods. This review will focus on the non-pharmacological and non-surgical methods.",
"title": ""
},
{
"docid": "51f5ba274068c0c03e5126bda056ba98",
"text": "Electricity is conceivably the most multipurpose energy carrier in modern global economy, and therefore primarily linked to human and economic development. Energy sector reform is critical to sustainable energy development and includes reviewing and reforming subsidies, establishing credible regulatory frameworks, developing policy environments through regulatory interventions, and creating marketbased approaches. Energy security has recently become an important policy driver and privatization of the electricity sector has secured energy supply and provided cheaper energy services in some countries in the short term, but has led to contrary effects elsewhere due to increasing competition, resulting in deferred investments in plant and infrastructure due to longer-term uncertainties. On the other hand global dependence on fossil fuels has led to the release of over 1100 GtCO2 into the atmosphere since the mid-19th century. Currently, energy-related GHG emissions, mainly from fossil fuel combustion for heat supply, electricity generation and transport, account for around 70% of total emissions including carbon dioxide, methane and some traces of nitrous oxide. This multitude of aspects play a role in societal debate in comparing electricity generating and supply options, such as cost, GHG emissions, radiological and toxicological exposure, occupational health and safety, employment, domestic energy security, and social impressions. Energy systems engineering provides a methodological scientific framework to arrive at realistic integrated solutions to complex energy problems, by adopting a holistic, systems-based approach, especially at decision making and planning stage. Modeling and optimization found widespread applications in the study of physical and chemical systems, production planning and scheduling systems, location and transportation problems, resource allocation in financial systems, and engineering design. This article reviews the literature on power and supply sector developments and analyzes the role of modeling and optimization in this sector as well as the future prospective of optimization modeling as a tool for sustainable energy systems. © 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "486978346e7a77f66e3ccce6f07fb346",
"text": "In this paper, we present a novel structure, Semi-AutoEncoder, based on AutoEncoder. We generalize it into a hybrid collaborative filtering model for rating prediction as well as personalized top-n recommendations. Experimental results on two real-world datasets demonstrate its state-of-the-art performances.",
"title": ""
},
{
"docid": "16a6c26d6e185be8383c062c6aa620f8",
"text": "In this research, we suggested a vision-based traffic accident detection system for automatically detecting, recording, and reporting traffic accidents at intersections. This model first extracts the vehicles from the video image of CCD camera, tracks the moving vehicles, and extracts features such as the variation rate of the velocity, position, area, and direction of moving vehicles. The model then makes decisions on the traffic accident based on the extracted features. And we suggested and designed the metadata registry for the system to improve the interoperability. In the field test, 4 traffic accidents were detected and recorded by the system. The video clips are invaluable for intersection safety analysis.",
"title": ""
},
{
"docid": "1fc468d42d432f716b3518dbba268db5",
"text": "In this paper a fast sweeping method for computing the numerical solution of Eikonal equations on a rectangular grid is presented. The method is an iterative method which uses upwind difference for discretization and uses Gauss-Seidel iterations with alternating sweeping ordering to solve the discretized system. The crucial idea is that each sweeping ordering follows a family of characteristics of the corresponding Eikonal equation in a certain direction simultaneously. The method has an optimal complexity of O(N) for N grid points and is extremely simple to implement in any number of dimensions. Monotonicity and stability properties of the fast sweeping algorithm are proven. Convergence and error estimates of the algorithm for computing the distance function is studied in detail. It is shown that 2n Gauss-Seidel iterations is enough for the distance function in n dimensions. An estimation of the number of iterations for general Eikonal equations is also studied. Numerical examples are used to verify the analysis.",
"title": ""
},
{
"docid": "5744e87741b6154b333e0f24bb17f0ea",
"text": "We describe two new related resources that facilitate modelling of general knowledge reasoning in 4th grade science exams. The first is a collection of curated facts in the form of tables, and the second is a large set of crowd-sourced multiple-choice questions covering the facts in the tables. Through the setup of the crowd-sourced annotation task we obtain implicit alignment information between questions and tables. We envisage that the resources will be useful not only to researchers working on question answering, but also to people investigating a diverse range of other applications such as information extraction, question parsing, answer type identification, and lexical semantic modelling.",
"title": ""
},
{
"docid": "7e6a3a04c24a0fc24012619d60ebb87b",
"text": "The recent trend toward democratization in countries throughout the globe has challenged scholars to pursue two potentially contradictory goals: to develop a differentiated conceptualization of democracy that captures the diverse experiences of these countries; and to extend the analysis to this broad range of cases without ‘stretching’ the concept. This paper argues that this dual challenge has led to a proliferation of conceptual innovations, including hundreds of subtypes of democracy—i.e., democracy ‘with adjectives.’ The paper explores the strengths and weaknesses of three important strategies of innovation that have emerged: ‘precising’ the definition of democracy; shifting the overarching concept with which democracy is associated; and generating various forms of subtypes. Given the complex structure of meaning produced by these strategies for refining the concept of democracy, we conclude by offering an old piece of advice with renewed urgency: It is imperative that scholars situate themselves in relation to this structure of meaning by clearly defining and explicating the conception of democracy they are employing.",
"title": ""
},
{
"docid": "5ea5650e03be82a600159c2095c387b6",
"text": "The medicinal plants are widely used by the traditional medicinal practitioners for curing various diseases in their day to day practice. In traditional system of medicine, different parts (leaves, stem, flower, root, seeds and even whole plant) of Ocimum sanctum Linn. have been recommended for the treatment of bronchitis, malaria, diarrhea, dysentery, skin disease, arthritis, eye diseases, insect bites and so on. The O. sanctum L. has also been suggested to possess anti-fertility, anticancer, antidiabetic, antifungal, antimicrobial, cardioprotective, analgesic, antispasmodic and adaptogenic actions. Eugenol (1-hydroxy-2-methoxy-4-allylbenzene), the active constituents present in O. sanctum L. have been found to be largely responsible for the therapeutic potentials. The pharmacological studies reported in the present review confirm the therapeutic value of O. sanctum L. The results of the above studies support the use of this plant for human and animal disease therapy and reinforce the importance of the ethno-botanical approach as a potential source of bioactive substances.",
"title": ""
},
{
"docid": "1830c839960f8ce9b26c906cc21e2a39",
"text": "This comparative review highlights the relationships between the disciplines of bloodstain pattern analysis (BPA) in forensics and that of fluid dynamics (FD) in the physical sciences. In both the BPA and FD communities, scientists study the motion and phase change of a liquid in contact with air, or with other liquids or solids. Five aspects of BPA related to FD are discussed: the physical forces driving the motion of blood as a fluid; the generation of the drops; their flight in the air; their impact on solid or liquid surfaces; and the production of stains. For each of these topics, the relevant literature from the BPA community and from the FD community is reviewed. Comments are provided on opportunities for joint BPA and FD research, and on the development of novel FD-based tools and methods for BPA. Also, the use of dimensionless numbers is proposed to inform BPA analyses.",
"title": ""
}
] |
scidocsrr
|
b8e2d6086985467637691a1160afc12b
|
An activity guideline for technology roadmapping implementation
|
[
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "3d81f003b29ad4cea90a533a002f3082",
"text": "Technology roadmapping is becoming an increasingly important and widespread approach for aligning technology with organizational goals. The popularity of roadmapping is due mainly to the communication and networking benefits that arise from the development and dissemination of roadmaps, particularly in terms of building common understanding across internal and external organizational boundaries. From its origins in Motorola and Corning more than 25 years ago, where it was used to link product and technology plans, the approach has been adapted for many different purposes in a wide variety of sectors and at all levels, from small enterprises to national foresight programs. Building on previous papers presented at PICMET, concerning the rapid initiation of the technique, and how to customize the approach, this paper highlights the evolution and continuing growth of the method and its application to general strategic planning. The issues associated with extending the roadmapping method to form a central element of an integrated strategic planning process are considered.",
"title": ""
}
] |
[
{
"docid": "e8ba260c18576f7f8b9f90afed0348e5",
"text": "This paper is aimed at recognition of offline handwritten characters in a given scanned text document with the help of neural networks. Image preprocessing, segmentation and feature extraction are various phases involved in character recognition. The first step is image acquisition followed by noise filtering, smoothing and image normalization of scanned image. Segmentation decomposes image into sub images and feature extraction extracts features from input image. Neural Network is created and trained to classify and recognize handwritten characters.",
"title": ""
},
{
"docid": "bed6069b49afd9c238267c6a276f1ede",
"text": "Today's top high performance computing systems run applications with hundreds of thousands of processes, contain hundreds of storage nodes, and must meet massive I/O requirements for capacity and performance. These leadership-class systems face daunting challenges to deploying scalable I/O systems. In this paper we present a case study of the I/O challenges to performance and scalability on Intrepid, the IBM Blue Gene/P system at the Argonne Leadership Computing Facility. Listed in the top 5 fastest supercomputers of 2008, Intrepid runs computational science applications with intensive demands on the I/O system. We show that Intrepid's file and storage system sustain high performance under varying workloads as the applications scale with the number of processes.",
"title": ""
},
{
"docid": "58612d7c22f6bd0bf1151b7ca5da0f7c",
"text": "In this paper we present a novel method for clustering words in micro-blogs, based on the similarity of the related temporal series. Our technique, named SAX*, uses the Symbolic Aggregate ApproXimation algorithm to discretize the temporal series of terms into a small set of levels, leading to a string for each. We then define a subset of “interesting” strings, i.e. those representing patterns of collective attention. Sliding temporal windows are used to detect co-occurring clusters of tokens with the same or similar string. To assess the performance of the method we first tune the model parameters on a 2-month 1 % Twitter stream, during which a number of world-wide events of differing type and duration (sports, politics, disasters, health, and celebrities) occurred. Then, we evaluate the quality of all discovered events in a 1-year stream, “googling” with the most frequent cluster n-grams and manually assessing how many clusters correspond to published news in the same temporal slot. Finally, we perform a complexity evaluation and we compare SAX* with three alternative methods for event discovery. Our evaluation shows that SAX* is at least one order of magnitude less complex than other temporal and non-temporal approaches to micro-blog clustering.",
"title": ""
},
{
"docid": "0857e32201b675c3e971c6caba8d2087",
"text": "Western tonal music relies on a formal geometric structure that determines distance relationships within a harmonic or tonal space. In functional magnetic resonance imaging experiments, we identified an area in the rostromedial prefrontal cortex that tracks activation in tonal space. Different voxels in this area exhibited selectivity for different keys. Within the same set of consistently activated voxels, the topography of tonality selectivity rearranged itself across scanning sessions. The tonality structure was thus maintained as a dynamic topography in cortical areas known to be at a nexus of cognitive, affective, and mnemonic processing.",
"title": ""
},
{
"docid": "5f17432d235a991a5544ad794875a919",
"text": "We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially observable Markov decision processes (POMDPs) provide a rich mathematical model to handle such environments but require a known model to be solved by most approaches. This is a limitation in practice as the exact model parameters are often difficult to specify exactly. We adopt a Bayesian approach where a posterior distribution over the model parameters is maintained and updated through experience with the environment. We propose a particle filter algorithm to maintain the posterior distribution and an online planning algorithm, based on trajectory sampling, to plan the best action to perform under the current posterior. The resulting approach selects control actions which optimally trade-off between 1) exploring the environment to learn the model, 2) identifying the system's state, and 3) exploiting its knowledge in order to maximize long-term rewards. Our preliminary results on a simulated robot navigation problem show that our approach is able to learn good models of the sensors and actuators, and performs as well as if it had the true model.",
"title": ""
},
{
"docid": "73a5466e9e471a015c601f75d2147ace",
"text": "In this paper we have proposed, developed and tested a hardware module based on Arduino Uno Board and Zigbee wireless technology, which measures the meteorological data, including air temperature, dew point temperature, barometric pressure, relative humidity, wind speed and wind direction. This information is received by a specially designed application interface running on a PC connected through Zigbee wireless link. The proposed system is also a mathematical model capable of generating short time local alerts based on the current weather parameters. It gives an on line and real time effect. We have also compared the data results of the proposed system with the data values of Meteorological Station Chandigarh and Snow & Avalanche Study Establishment Chandigarh Laboratory. The results have come out to be very precise. The idea behind to this work is to monitor the weather parameters, weather forecasting, condition mapping and warn the people from its disastrous effects.",
"title": ""
},
{
"docid": "28ba1eddc74c930350e1b2df5931fa39",
"text": "In this paper, the problem of how to implement the MTPA/MTPV control for an energy efficient operation of a high speed Interior Permanent Magnet Synchronous Motor (IPMSM) used as traction drive is considered. This control method depends on the inductances Ld, Lq, the flux linkage ΨPM and the stator resistance Rs which might vary during operation. The parameter variation causes miscalculation of the set point currents Id and Iq for the inner current control system and thus a wrong torque will be set. Consequently the IPMSM will not be operating in the optimal operation point which yields to a reduction of the total energy efficiency and the performance. As a consequence, this paper proposes the implementation of the the Recursive Least Square Estimation (RLS) for a high speed and high performance IPMSM. With this online identification method the variable parameters are estimated and adapted to the MTPA and MTPV control strategy.",
"title": ""
},
{
"docid": "07d8df7d895f0af5e76bd0d5980055fb",
"text": "Debate over euthanasia is not a recent phenomenon. Over the years, public opinion, decisions of courts, and legal and medical approaches to the issue of euthanasia has been conflicting. The connection between murder and euthanasia has been attempted in a few debates. Although it is widely accepted that murder is a crime, a clearly defined stand has not been taken on euthanasia. This article considers euthanasia from the medical, legal, and global perspectives and discusses the crime of murder in relation to euthanasia, taking into consideration the issue of consent in the law of crime. This article concludes that in the midst of this debate on euthanasia and murder, the important thing is that different countries need to find their own solution to the issue of euthanasia rather than trying to import solutions from other countries.",
"title": ""
},
{
"docid": "aaafdd0e0690fc253ecc9c0059b0d417",
"text": "With the discovery of the polymerase chain reaction (PCR) in the mid-1980's, the last in a series of critical molecular biology techniques (to include the isolation of DNA from human and non-human biological material, and primary sequence analysis of DNA) had been developed to rapidly analyze minute quantities of mitochondrial DNA (mtDNA). This was especially true for mtDNA isolated from challenged sources, such as ancient or aged skeletal material and hair shafts. One of the beneficiaries of this work has been the forensic community. Over the last decade, a significant amount of research has been conducted to develop PCR-based sequencing assays for the mtDNA control region (CR), which have subsequently been used to further characterize the CR. As a result, the reliability of these assays has been investigated, the limitations of the procedures have been determined, and critical aspects of the analysis process have been identified, so that careful control and monitoring will provide the basis for reliable testing. With the application of these assays to forensic identification casework, mtDNA sequence analysis has been properly validated, and is a reliable procedure for the examination of biological evidence encountered in forensic criminalistic cases.",
"title": ""
},
{
"docid": "6a383d8026b500d3365f3a668bafc732",
"text": "In the era of deep sub-wavelength lithography for nanometer VLSI designs, manufacturability and yield issues are critical and need to be addressed during the key physical design implementation stage, in particular detailed routing. However, most existing studies for lithography-friendly routing suffer from either huge run-time due to the intensive lithographic computations involved, or severe loss of quality of results because of the inaccurate predictive models. In this paper, we propose AENEID - a fast, generic and high performance lithography-friendly detailed router for enhanced manufacturability. AENEID combines novel hotspot detection and routing path prediction techniques through modern data learning methods and applies them at the detailed routing stage to drive high fidelity lithography-friendly routing. Compared with existing litho-friendly routing works, AENEID demonstrates 26% to 66% (avg. 50%) of lithography hotspot reduction at the cost of only 18%-38% (avg. 30%) of run-time overhead.",
"title": ""
},
{
"docid": "fad8cf15678cccbc727e9fba6292474d",
"text": "OBJECTIVE\nClinical records contain significant medical information that can be useful to researchers in various disciplines. However, these records also contain personal health information (PHI) whose presence limits the use of the records outside of hospitals. The goal of de-identification is to remove all PHI from clinical records. This is a challenging task because many records contain foreign and misspelled PHI; they also contain PHI that are ambiguous with non-PHI. These complications are compounded by the linguistic characteristics of clinical records. For example, medical discharge summaries, which are studied in this paper, are characterized by fragmented, incomplete utterances and domain-specific language; they cannot be fully processed by tools designed for lay language.\n\n\nMETHODS AND RESULTS\nIn this paper, we show that we can de-identify medical discharge summaries using a de-identifier, Stat De-id, based on support vector machines and local context (F-measure=97% on PHI). Our representation of local context aids de-identification even when PHI include out-of-vocabulary words and even when PHI are ambiguous with non-PHI within the same corpus. Comparison of Stat De-id with a rule-based approach shows that local context contributes more to de-identification than dictionaries combined with hand-tailored heuristics (F-measure=85%). Comparison with two well-known named entity recognition (NER) systems, SNoW (F-measure=94%) and IdentiFinder (F-measure=36%), on five representative corpora show that when the language of documents is fragmented, a system with a relatively thorough representation of local context can be a more effective de-identifier than systems that combine (relatively simpler) local context with global context. Comparison with a Conditional Random Field De-identifier (CRFD), which utilizes global context in addition to the local context of Stat De-id, confirms this finding (F-measure=88%) and establishes that strengthening the representation of local context may be more beneficial for de-identification than complementing local with global context.",
"title": ""
},
{
"docid": "8e02a76799f72d86e7240384bea563fd",
"text": "We have developed the suspended-load backpack, which converts mechanical energy from the vertical movement of carried loads (weighing 20 to 38 kilograms) to electricity during normal walking [generating up to 7.4 watts, or a 300-fold increase over previous shoe devices (20 milliwatts)]. Unexpectedly, little extra metabolic energy (as compared to that expended carrying a rigid backpack) is required during electricity generation. This is probably due to a compensatory change in gait or loading regime, which reduces the metabolic power required for walking. This electricity generation can help give field scientists, explorers, and disaster-relief workers freedom from the heavy weight of replacement batteries and thereby extend their ability to operate in remote areas.",
"title": ""
},
{
"docid": "372f54e1aa5901c53b76939e9572ab74",
"text": "-We develop a technique to test the hypothesis that multilavered../~'ed@~rward network,~ with [~'w units on the .Drst hidden layer ,~eneralize better than networks with many ttllits in the ~irst laver. Large networks are trained to per/orrn a class![)cation task and the redundant units are removed (\"pruning\") to produce the smallest network capable of'perf'orming the task. A teclmiqtte ,/~r inserting layers u'here /~rtttlitlg has introduced linear inseparability is also described. Two tests Of abilio' to generalize are used--the ability to classiflv training inputs corrupwd hv noise and the ability to classtlflv new patterns/)ore each class. The hypothes'is is f?~ltnd to be ,fa{s'e f~>r networks trained with noisy input.s'. Pruning to the mittitnum nt#nber c~f units in the ./irvt layer produces twtworks which correctly classify the training ,set hut j,,eneralize poor O' compared with lar~er ttetworks. Keywords--Neural Networks, Back-propagation, Pattern recognition, Generalization, t|idden units. Pruning. I N T R O D U C T I O N One of the major strengths of artificial neural networks is their ability to recognize or correctly classify patterns which have never been presented to the network before. Neural networks appear unique in their ability to extract the essential features from a training set and use them to identify new inputs. It is not known how the size or structure of a network affects this quality. This work concerns layered, feed-forward networks learning a classification task by back-propagation. Our desire was to investigate the relationship between network structure and the ability of the network to generalize from the training set. We examined the effect of noise in the training set on network structure and generalization, and we examined the effects of network size. This second could not be done simply by training networks of different sizes, since trained networks are not necessarily making effective use of all their hidden units. To address the question of the effective size of a network, we have been developing a technique of training a network which is known or suspected to be larger than required and then trimming off excess units to obtain the smallest network (Sietsma & Dow, Acknowledgement : The authors wish to thank Mr. David A. Penington for his valuable contributions, both in ideas and in excellent computer programs. Requests for reprints should be sent to J. Sietsma, USD, Material Research Laboratory, P.O. Box 50, Ascot Vale, Victoria 3032, Australia. 1988). This is slower than training the \"right'\" size network from the start, but it proves to have a number of advantages. When we started we had some assumptions about the relationship between size and ability to generalize, which we wished to test. If a network can be trained successfully with a small number of units on the first processing layer, these units must be extracting features of the classes which can be compactly expressed by the units and interpreted by the higher layers. This compact coding might be expected to imply good generalization performance. To have many units in a layer can allow a network to become overspecific, approximating a look-up table, particularly in the extreme where the number of units in the first processing layer is equal to the number of examplars in the training set. It has been suggested that networks with n'.,we layers, and fewer units in the early layers, may generalize better than \"'shallow\" networks with many units in each layer (Rumelhart, 1988). 
However, narrow networks with many layers are far harder to train than broad networks of one or two layers. Sometimes we found that after rigorous removal of inessential units, more layers were required to perform the task. This suggested a way of producing long, narrow networks. A broad network could be trained, trimmed to the fewest possible units, and then extra layers inserted to enable the network to relearn the solution. This avoids both the training difficulties and the problem that the smallest number of units needed for a task is not generally known. We could then test",
"title": ""
},
{
"docid": "f06d083ebd1449b1fd84e826898c2fda",
"text": "The resolution of any linear imaging system is given by its point spread function (PSF) that quantifies the blur of an object point in the image. The sharper the PSF, the better the resolution is. In standard fluorescence microscopy, however, diffraction dictates a PSF with a cigar-shaped main maximum, called the focal spot, which extends over at least half the wavelength of light (λ = 400–700 nm) in the focal plane and >λ along the optical axis (z). Although concepts have been developed to sharpen the focal spot both laterally and axially, none of them has reached their ultimate goal: a spherical spot that can be arbitrarily downscaled in size. Here we introduce a fluorescence microscope that creates nearly spherical focal spots of 40–45 nm (λ/16) in diameter. Fully relying on focused light, this lens-based fluorescence nanoscope unravels the interior of cells noninvasively, uniquely dissecting their sub-λ–sized organelles.",
"title": ""
},
{
"docid": "ca4752a75f440dda1255a71764258a51",
"text": "Neurofeedback is a method for using neural activity displayed on a computer to regulate one's own brain function and has been shown to be a promising technique for training individuals to interact with brain-machine interface applications such as neuroprosthetic limbs. The goal of this study was to develop a user-friendly functional near-infrared spectroscopy (fNIRS)-based neurofeedback system to upregulate neural activity associated with motor imagery, which is frequently used in neuroprosthetic applications. We hypothesized that fNIRS neurofeedback would enhance activity in motor cortex during a motor imagery task. Twenty-two participants performed active and imaginary right-handed squeezing movements using an elastic ball while wearing a 98-channel fNIRS device. Neurofeedback traces representing localized cortical hemodynamic responses were graphically presented to participants in real time. Participants were instructed to observe this graphical representation and use the information to increase signal amplitude. Neural activity was compared during active and imaginary squeezing with and without neurofeedback. Active squeezing resulted in activity localized to the left premotor and supplementary motor cortex, and activity in the motor cortex was found to be modulated by neurofeedback. Activity in the motor cortex was also shown in the imaginary squeezing condition only in the presence of neurofeedback. These findings demonstrate that real-time fNIRS neurofeedback is a viable platform for brain-machine interface applications.",
"title": ""
},
{
"docid": "230924b74e7492d9999c1b2a134deac3",
"text": "The name ambiguity problem presents many challenges for scholar finding, citation analysis and other related research fields. To attack this issue, various disambiguation methods combined with separate disambiguation features have been put forward. In this paper, we offer an unsupervised Dempster–Shafer theory (DST) based hierarchical agglomerative clustering algorithm for author disambiguation tasks. Distinct from existing methods, we exploit the DST in combination with Shannon’s entropy to fuse various disambiguation features and come up with a more reliable candidate pair of clusters for amalgamation in each iteration of clustering. Also, some solutions to determine the convergence condition of the clustering process are proposed. Depending on experiments, our method outperforms three unsupervised models, and achieves comparable performances to a supervised model, while does not prescribe any hand-labelled training data.",
"title": ""
},
{
"docid": "082b1c341435ce93cfab869475ed32bd",
"text": "Given a graph where vertices are partitioned into k terminals and non-terminals, the goal is to compress the graph (i.e., reduce the number of non-terminals) using minor operations while preserving terminal distances approximately. The distortion of a compressed graph is the maximum multiplicative blow-up of distances between all pairs of terminals. We study the trade-off between the number of non-terminals and the distortion. This problem generalizes the Steiner Point Removal (SPR) problem, in which all non-terminals must be removed. We introduce a novel black-box reduction to convert any lower bound on distortion for the SPR problem into a super-linear lower bound on the number of non-terminals, with the same distortion, for our problem. This allows us to show that there exist graphs such that every minor with distortion less than 2 / 2.5 / 3 must have Ω(k2) / Ω(k5/4) / Ω(k6/5) non-terminals, plus more trade-offs in between. The black-box reduction has an interesting consequence: if the tight lower bound on distortion for the SPR problem is super-constant, then allowing any O(k) non-terminals will not help improving the lower bound to a constant. We also build on the existing results on spanners, distance oracles and connected 0-extensions to show a number of upper bounds for general graphs, planar graphs, graphs that exclude a fixed minor and bounded treewidth graphs. Among others, we show that any graph admits a minor with O(log k) distortion and O(k2) non-terminals, and any planar graph admits a minor with 1 + ε distortion and Õ((k/ε)2) non-terminals. 1998 ACM Subject Classification G.2.2 Graph Theory",
"title": ""
},
{
"docid": "88520d58d125e87af3d5ea6bb4335c4f",
"text": "We present an algorithm for marker-less performance capture of interacting humans using only three hand-held Kinect cameras. Our method reconstructs human skeletal poses, deforming surface geometry and camera poses for every time step of the depth video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Only the combination of geometric and photometric correspondences and the integration of human pose and camera pose estimation enables reliable performance capture with only three sensors. As opposed to previous performance capture methods, our algorithm succeeds on general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.",
"title": ""
},
{
"docid": "2a6aa350dd7ddc663aaaafe4d745845e",
"text": "Neural networks augmented with external memory have the ability to learn algorithmic solutions to complex tasks. These models appear promising for applications such as language modeling and machine translation. However, they scale poorly in both space and time as the amount of memory grows — limiting their applicability to real-world domains. Here, we present an end-to-end differentiable memory access scheme, which we call Sparse Access Memory (SAM), that retains the representational power of the original approaches whilst training efficiently with very large memories. We show that SAM achieves asymptotic lower bounds in space and time complexity, and find that an implementation runs 1,000⇥ faster and with 3,000⇥ less physical memory than non-sparse models. SAM learns with comparable data efficiency to existing models on a range of synthetic tasks and one-shot Omniglot character recognition, and can scale to tasks requiring 100,000s of time steps and memories. As well, we show how our approach can be adapted for models that maintain temporal associations between memories, as with the recently introduced Differentiable Neural Computer.",
"title": ""
},
{
"docid": "b1845c42902075de02c803e77345a30f",
"text": "Unsupervised representation learning algorithms such as word2vec and ELMo improve the accuracy of many supervised NLP models, mainly because they can take advantage of large amounts of unlabeled text. However, the supervised models only learn from taskspecific labeled data during the main training phase. We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data. On labeled examples, standard supervised learning is used. On unlabeled examples, CVT teaches auxiliary prediction modules that see restricted views of the input (e.g., only part of a sentence) to match the predictions of the full model seeing the whole input. Since the auxiliary modules and the full model share intermediate representations, this in turn improves the full model. Moreover, we show that CVT is particularly effective when combined with multitask learning. We evaluate CVT on five sequence tagging tasks, machine translation, and dependency parsing, achieving state-of-the-art results.1",
"title": ""
}
] |
scidocsrr
|
4bdd8803192ea4cb8b47adefd6e45054
|
On-Line Mobile Robot Model Identification Using Integrated Perturbative Dynamics
|
[
{
"docid": "14827ea435d82e4bfe481713af45afed",
"text": "This paper introduces a model-based approach to estimating longitudinal wheel slip and detecting immobilized conditions of autonomous mobile robots operating on outdoor terrain. A novel tire traction/braking model is presented and used to calculate vehicle dynamic forces in an extended Kalman filter framework. Estimates of external forces and robot velocity are derived using measurements from wheel encoders, inertial measurement unit, and GPS. Weak constraints are used to constrain the evolution of the resistive force estimate based upon physical reasoning. Experimental results show the technique accurately and rapidly detects robot immobilization conditions while providing estimates of the robot's velocity during normal driving. Immobilization detection is shown to be robust to uncertainty in tire model parameters. Accurate immobilization detection is demonstrated in the absence of GPS, indicating the algorithm is applicable for both terrestrial applications and space robotics.",
"title": ""
}
] |
[
{
"docid": "c5b8c7fa8518595196aa48740578cb05",
"text": "Control of complex systems involves both system identification and controller design. Deep neural networks have proven to be successful in many identification tasks, however, from model-based control perspective, these networks are difficult to work with because they are typically nonlinear and nonconvex. Therefore many systems are still identified and controlled based on simple linear models despite their poor representation capability. In this paper we bridge the gap between model accuracy and control tractability faced by neural networks, by explicitly constructing networks that are convex with respect to their inputs. We show that these input convex networks can be trained to obtain accurate models of complex physical systems. In particular, we design input convex recurrent neural networks to capture temporal behavior of dynamical systems. Then optimal controllers can be achieved via solving a convex model predictive control problem. Experiment results demonstrate the good potential of the proposed input convex neural network based approach in a variety of control applications. In particular we show that in the MuJoCo locomotion tasks, we could achieve over 10% higher performance using 5× less time compared with state-of-the-art model-based reinforcement learning method; and in the building HVAC control example, our method achieved up to 20% energy reduction compared with classic linear models.",
"title": ""
},
{
"docid": "633cce3860a44e5931d93dc3e83f14f4",
"text": "The main theme of this paper is to present a new digital-controlled technique for battery charger to achieve constant current and voltage control while not requiring current feedback. The basic idea is to achieve constant current charging control by limiting the duty cycle of charger. Therefore, the current feedback signal is not required and thereby reducing the cost of A/D converter, current sensor, and computation complexity required for current control. Moreover, when the battery voltage is increased to the preset voltage level using constant current charge, the charger changes the control mode to constant voltage charge. A digital-controlled charger is designed and implemented for uninterrupted power supply (UPS) applications. The charger control is based upon the proposed control method in software. As a result, the UPS control, including boost converter, charger, and inverter control can be realized using only one low cost MCU. Experimental results demonstrate that the effectiveness of the design and implementation.",
"title": ""
},
{
"docid": "95f9547a510ca82b283c59560b5a93c6",
"text": "Human action recognition in videos is one of the most challenging tasks in computer vision. One important issue is how to design discriminative features for representing spatial context and temporal dynamics. Here, we introduce a path signature feature to encode information from intra-frame and inter-frame contexts. A key step towards leveraging this feature is to construct the proper trajectories (paths) for the data steam. In each frame, the correlated constraints of human joints are treated as small paths, then the spatial path signature features are extracted from them. In video data, the evolution of these spatial features over time can also be regarded as paths from which the temporal path signature features are extracted. Eventually, all these features are concatenated to constitute the input vector of a fully connected neural network for action classification. Experimental results on four standard benchmark action datasets, J-HMDB, SBU Dataset, Berkeley MHAD, and NTURGB+D demonstrate that the proposed approach achieves state-of-the-art accuracy even in comparison with recent deep learning based models.",
"title": ""
},
{
"docid": "a212a2969c0c72894dcde880bbf29fa7",
"text": "Machine learning is useful for building robust learning models, and it is based on a set of features that identify a state of an object. Unfortunately, some data sets may contain a large number of features making, in some cases, the learning process time consuming and the generalization capability of machine learning poor. To make a data set easy to learn and understand, it is typically recommended to remove the most irrelevant features from the set. However, choosing what data should be kept or eliminated may be performed by complex selection algorithms, and optimal feature selection may require an exhaustive search of all possible subsets of features which is computationally expensive. This paper proposes a simple method to perform feature selection using artificial neural networks. It is shown experimentally that genetic algorithms in combination with artificial neural networks can easily be used to extract those features that are required to produce a desired result. Experimental results show that very few hidden neurons are required for feature selection as artificial neural networks are only used to assess the quality of an individual, which is a chosen subset of features.",
"title": ""
},
{
"docid": "1f2832276b346316b15fe05d8593217c",
"text": "This paper presents a new method for generating inductive loop invariants that are expressible as boolean combinations of linear integer constraints. The key idea underlying our technique is to perform a backtracking search that combines Hoare-style verification condition generation with a logical abduction procedure based on quantifier elimination to speculate candidate invariants. Starting with true, our method iteratively strengthens loop invariants until they are inductive and strong enough to verify the program. A key feature of our technique is that it is lazy: It only infers those invariants that are necessary for verifying program correctness. Furthermore, our technique can infer arbitrary boolean combinations (including disjunctions) of linear invariants. We have implemented the proposed approach in a tool called HOLA. Our experiments demonstrate that HOLA can infer interesting invariants that are beyond the reach of existing state-of-the-art invariant generation tools.",
"title": ""
},
{
"docid": "a411780d406e8b720303d18cd6c9df68",
"text": "Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.",
"title": ""
},
{
"docid": "d462883de69e86cec8631d195a8a064d",
"text": "Micro Unmanned Aerial Vehicles (UAVs) such as quadrocopters have gained great popularity over the last years, both as a research platform and in various application fields. However, some complex application scenarios call for the formation of swarms consisting of multiple drones. In this paper a platform for the creation of such swarms is presented. It is based on commercially available quadrocopters enhanced with on-board processing and communication units enabling full autonomy of individual drones. Furthermore, a generic ground control station is presented that serves as integration platform. It allows the seamless coordination of different kinds of sensor platforms.",
"title": ""
},
{
"docid": "100ab34e96da2b8640bd97467e9c91e1",
"text": "Manual work is taken over the robot technology and many of the related robot appliances are being used extensively also. Here represents the technology that proposed the working of robot for Floor cleaning. This floor cleaner robot can work in any of two modes i.e. “Automatic and Manual”. All hardware and software operations are controlled by AT89S52 microcontroller. This robot can perform sweeping and mopping task. RF modules have been used for wireless communication between remote (manual mode) and robot and having range 50m. This robot is incorporated with IR sensor for obstacle detection and automatic water sprayer pump. Four motors are used, two for cleaning, one for water pump and one for wheels. Dual relay circuit used to drive the motors one for water pump and another for cleaner. In previous work, there was no automatic water sprayer used and works only in automatic mode. In the automatic mode robot control all the operations itself and change the lane in case of hurdle detection and moves back. In the manual mode, the keypad is used to perform the expected task and to operate robot. In manual mode, RF module has been used to transmit and receive the information between remote and robot and display the information related to the hurdle detection on LCD. The whole circuitry is connected with 12V battery.",
"title": ""
},
{
"docid": "9e32991f47d2d480ed35e488b85dfb79",
"text": "Convolutional Neural Networks (CNNs) are powerful models that achieve impressive results for image classification. In addition, pre-trained CNNs are also useful for other computer vision tasks as generic feature extractors [1]. This paper aims to gain insight into the feature aspect of CNN and demonstrate other uses of CNN features. Our results show that CNN feature maps can be used with Random Forests and SVM to yield classification results that outperforms the original CNN. A CNN that is less than optimal (e.g. not fully trained or overfitting) can also extract features for Random Forest/SVM that yield competitive classification accuracy. In contrast to the literature which uses the top-layer activations as feature representation of images for other tasks [1], using lower-layer features can yield better results for classification.",
"title": ""
},
{
"docid": "d752bf764e4518cee561b11146d951c4",
"text": "Speech recognition is an increasingly important input modality, especially for mobile computing. Because errors are unavoidable in real applications, efficient correction methods can greatly enhance the user experience. In this paper we study a reranking and classification strategy for choosing word alternates to display to the user in the framework of a tap-to-correct interface. By employing a logistic regression model to estimate the probability that an alternate will offer a useful correction to the user, we can significantly reduce the average length of the alternates lists generated with no reduction in the number of words they are able to correct.",
"title": ""
},
{
"docid": "edd78912d764ab33e0e1a8124bc7d709",
"text": "Natural language understanding and dialogue policy learning are both essential in conversational systems that predict the next system actions in response to a current user utterance. Conventional approaches aggregate separate models of natural language understanding (NLU) and system action prediction (SAP) as a pipeline that is sensitive to noisy outputs of error-prone NLU. To address the issues, we propose an end-to-end deep recurrent neural network with limited contextual dialogue memory by jointly training NLU and SAP on DSTC4 multi-domain human-human dialogues. Experiments show that our proposed model significantly outperforms the state-of-the-art pipeline models for both NLU and SAP, which indicates that our joint model is capable of mitigating the affects of noisy NLU outputs, and NLU model can be refined by error flows backpropagating from the extra supervised signals of system actions.",
"title": ""
},
{
"docid": "fe194d04f5bb78c5fa40e93fc6046b42",
"text": "Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, EnglishFrench and Chinese-to-English translation tasks.",
"title": ""
},
{
"docid": "75228d9fd5255ecb753ee3b465640d97",
"text": "To pave the way towards disclosing the full potential of 5G networking, emerging Mobile Edge Computing techniques are gaining momentum in both academic and industrial research as a means to enhance infrastructure scalability and reliability by moving control functions close to the edge of the network. After the promising results under achievement within the EU Mobile Cloud Networking project, we claim the suitability of deploying Evolved Packet Core (EPC) support solutions as a Service (EPCaaS) over a uniform edge cloud infrastructure of Edge Nodes, by following the concepts of Network Function Virtualization (NFV). This paper originally focuses on the support needed for efficient elasticity provisioning of EPCaaS stateful components, by proposing novel solutions for effective subscribers' state management in quality-constrained 5G scenarios. In particular, to favor flexibility and high-availability against network function failures, we have developed a state sharing mechanism across different data centers even in presence of firewall/network encapsulation. In addition, our solution can dynamically select which state portions should be shared and to which Edge Nodes. The reported experimental results, measured over the widely recognized Open5GCore testbed, demonstrate the feasibility and effectiveness of the approach, as well as its capability to satisfy \"carrier-grade\" quality requirements while ensuring good elasticity and scalability.",
"title": ""
},
{
"docid": "0c7eff3e7c961defce07b98914431414",
"text": "The navigational system of the mammalian cortex comprises a number of interacting brain regions. Grid cells in the medial entorhinal cortex and place cells in the hippocampus are thought to participate in the formation of a dynamic representation of the animal's current location, and these cells are presumably critical for storing the representation in memory. To traverse the environment, animals must be able to translate coordinate information from spatial maps in the entorhinal cortex and hippocampus into body-centered representations that can be used to direct locomotion. How this is done remains an enigma. We propose that the posterior parietal cortex is critical for this transformation.",
"title": ""
},
{
"docid": "cfa036aa6eb15b3634fae9a2f3f137da",
"text": "We present a high-efficiency transmitter based on asymmetric multilevel outphasing (AMO). AMO transmitters improve their efficiency over LINC (linear amplification using nonlinear components) transmitters by switching the output envelopes of the power amplifiers among a discrete set of levels. This minimizes the occurrence of large outphasing angles, reducing the energy lost in the power combiner. We demonstrate this concept with a 2.5-GHz, 20-dBm peak output power transmitter using 2-level AMO designed in a 65-nm CMOS process. To the authors' knowledge, this IC is the first integrated implementation of the AMO concept. At peak output power, the measured power-added efficiency is 27.8%. For a 16-QAM signal with 6.1dB peak-to-average power ratio, the AMO prototype improves the average efficiency from 4.7% to 10.0% compared to the standard LINC system.",
"title": ""
},
{
"docid": "af486334ab8cae89d9d8c1c17526d478",
"text": "Notifications are a core feature of mobile phones. They inform users about a variety of events. Users may take immediate action or ignore them depending on the importance of a notification as well as their current context. The nature of notifications is manifold, applications use them both sparsely and frequently. In this paper we present the first large-scale analysis of mobile notifications with a focus on users' subjective perceptions. We derive a holistic picture of notifications on mobile phones by collecting close to 200 million notifications from more than 40,000 users. Using a data-driven approach, we break down what users like and dislike about notifications. Our results reveal differences in importance of notifications and how users value notifications from messaging apps as well as notifications that include information about people and events. Based on these results we derive a number of findings about the nature of notifications and guidelines to effectively use them.",
"title": ""
},
{
"docid": "af1f047dca3a4d7cbd75c84e5d8d1552",
"text": "UNLABELLED\nAcupuncture is a therapeutic treatment that is defined as the insertion of needles into the body at specific points (ie, acupoints). Advances in functional neuroimaging have made it possible to study brain responses to acupuncture; however, previous studies have mainly concentrated on acupoint specificity. We wanted to focus on the functional brain responses that occur because of needle insertion into the body. An activation likelihood estimation meta-analysis was carried out to investigate common characteristics of brain responses to acupuncture needle stimulation compared to tactile stimulation. A total of 28 functional magnetic resonance imaging studies, which consisted of 51 acupuncture and 10 tactile stimulation experiments, were selected for the meta-analysis. Following acupuncture needle stimulation, activation in the sensorimotor cortical network, including the insula, thalamus, anterior cingulate cortex, and primary and secondary somatosensory cortices, and deactivation in the limbic-paralimbic neocortical network, including the medial prefrontal cortex, caudate, amygdala, posterior cingulate cortex, and parahippocampus, were detected and assessed. Following control tactile stimulation, weaker patterns of brain responses were detected in areas similar to those stated above. The activation and deactivation patterns following acupuncture stimulation suggest that the hemodynamic responses in the brain simultaneously reflect the sensory, cognitive, and affective dimensions of pain.\n\n\nPERSPECTIVE\nThis article facilitates a better understanding of acupuncture needle stimulation and its effects on specific activity changes in different brain regions as well as its relationship to the multiple dimensions of pain. Future studies can build on this meta-analysis and will help to elucidate the clinically relevant therapeutic effects of acupuncture.",
"title": ""
},
{
"docid": "feeb5741fae619a37f44eae46169e9d1",
"text": "A 24-GHz novel active quasi-circulator is developed in TSMC 0.18-µm CMOS. We proposed a new architecture by using the canceling mechanism to achieve high isolations and reduce the circuit area. The measured insertion losses |S<inf>32</inf>| and |S<inf>21</inf>| are 9 and 8.5 dB, respectively. The isolation |S<inf>31</inf>| is greater than 30 dB. The dc power consumption is only 9.12 mW with a chip size of 0.35 mm<sup>2</sup>.",
"title": ""
},
{
"docid": "bf6d56c2fd716802b8e2d023f86a4225",
"text": "This is the first case report to demonstrate the efficacy of immersive computer-generated virtual reality (VR) and mixed reality (touching real objects which patients also saw in VR) for the treatment of spider phobia. The subject was a 37-yr-old female with severe and incapacitating fear of spiders. Twelve weekly 1-hr sessions were conducted over a 3-month period. Outcome was assessed on measures of anxiety, avoidance, and changes in behavior toward real spiders. VR graded exposure therapy was successful for reducing fear of spiders providing converging evidence for a growing literature showing the effectiveness of VR as a new medium for exposure therapy.",
"title": ""
}
] |
scidocsrr
|
16a707893095f361f70f43871bf7d077
|
DeepCredit: Exploiting User Cickstream for Loan Risk Prediction in P2P Lending
|
[
{
"docid": "ad0688b0c80cf6eeed13a2a9b112f97c",
"text": "P2P lending is an emerging Internet-based application where individuals can directly borrow money from each other. The past decade has witnessed the rapid development and prevalence of online P2P lending platforms, examples of which include Prosper, LendingClub, and Kiva. Meanwhile, extensive research has been done that mainly focuses on the studies of platform mechanisms and transaction data. In this article, we provide a comprehensive survey on the research about P2P lending, which, to the best of our knowledge, is the first focused effort in this field. Specifically, we first provide a systematic taxonomy for P2P lending by summarizing different types of mainstream platforms and comparing their working mechanisms in detail. Then, we review and organize the recent advances on P2P lending from various perspectives (e.g., economics and sociology perspective, and data-driven perspective). Finally, we propose our opinions on the prospects of P2P lending and suggest some future research directions in this field. Meanwhile, throughout this paper, some analysis on real-world data collected from Prosper and Kiva are also conducted.",
"title": ""
},
{
"docid": "fb223abb83654f316da33d9c97f3173f",
"text": "Online peer-to-peer (P2P) lending services are a new type of social platform that enables individuals borrow and lend money directly from one to another. In this paper, we study the dynamics of bidding behavior in a P2P loan auction website, Prosper.com. We investigate the change of various attributes of loan requesting listings over time, such as the interest rate and the number of bids. We observe that there is herding behavior during bidding, and for most of the listings, the numbers of bids they receive reach spikes at very similar time points. We explain these phenomena by showing that there are economic and social factors that lenders take into account when deciding to bid on a listing. We also observe that the profits the lenders make are tied with their bidding preferences. Finally, we build a model based on the temporal progression of the bidding, that reliably predicts the success of a loan request listing, as well as whether a loan will be paid back or not.",
"title": ""
}
] |
[
{
"docid": "bf9ba92f1c7aa2ae4ed32dd270552eb0",
"text": "Video-based person re-identification (re-id) is a central application in surveillance systems with significant concern in security. Matching persons across disjoint camera views in their video fragments is inherently challenging due to the large visual variations and uncontrolled frame rates. There are two steps crucial to person re-id, namely discriminative feature learning and metric learning. However, existing approaches consider the two steps independently, and they do not make full use of the temporal and spatial information in videos. In this paper, we propose a Siamese attention architecture that jointly learns spatiotemporal video representations and their similarity metrics. The network extracts local convolutional features from regions of each frame, and enhance their discriminative capability by focusing on distinct regions when measuring the similarity with another pedestrian video. The attention mechanism is embedded into spatial gated recurrent units to selectively propagate relevant features and memorize their spatial dependencies through the network. The model essentially learns which parts (where) from which frames (when) are relevant and distinctive for matching persons and attaches higher importance therein. The proposed Siamese model is end-to-end trainable to jointly learn comparable hidden representations for paired pedestrian videos and their similarity value. Extensive experiments on three benchmark datasets show the effectiveness of each component of the proposed deep network while outperforming state-of-the-art methods.",
"title": ""
},
{
"docid": "2bfeadedeb38d1a923779f036b305906",
"text": "A monolithic high-resolution (individual pixel size 300times300 mum2) active matrix (AM) programmed 8times8 micro-LED array was fabricated using flip-chip technology. The display was composed of an AM panel and a LED microarray. The AM panel included driving circuits composed of p-type MOS transistors for each pixel. The n-electrodes of the LED pixels in the microarray were connected together, and the p-electrodes were connected to individual outputs of the driving circuits on the AM panel. Using flip-chip technology, the LED microarray was then flipped onto the AM panel to create a microdisplay.",
"title": ""
},
{
"docid": "27d7f7935c235a3631fba6e3df08f623",
"text": "We investigate the task of Named Entity Recognition (NER) in the domain of biomedical text. There is little published work employing modern neural network techniques in this domain, probably due to the small sizes of human-labeled data sets, as non-trivial neural models would have great difficulty avoiding overfitting. In this work we follow a semi-supervised learning approach: We first train state-of-the art (deep) neural networks on a large corpus of noisy machine-labeled data, then “transfer” and fine-tune the learned model on two higher-quality humanlabeled data sets. This approach yields higher performance than the current best published systems for the class DISEASE. It trails but is not far from the currently best systems for the class CHEM.",
"title": ""
},
{
"docid": "7437f0c8549cb8f73f352f8043a80d19",
"text": "Graphene is considered as one of leading candidates for gas sensor applications in the Internet of Things owing to its unique properties such as high sensitivity to gas adsorption, transparency, and flexibility. We present self-activated operation of all graphene gas sensors with high transparency and flexibility. The all-graphene gas sensors which consist of graphene for both sensor electrodes and active sensing area exhibit highly sensitive, selective, and reversible responses to NO2 without external heating. The sensors show reliable operation under high humidity conditions and bending strain. In addition to these remarkable device performances, the significantly facile fabrication process enlarges the potential of the all-graphene gas sensors for use in the Internet of Things and wearable electronics.",
"title": ""
},
{
"docid": "065ca3deb8cb266f741feb67e404acb5",
"text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet",
"title": ""
},
{
"docid": "8b15435562b287eb97a6c573222797ec",
"text": "Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We give an exact connection to a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs. We show empirically that several learned networks are consistent with our mathematical analysis and then demonstrate that with such a simple theoretical framework, we can obtain reasonable reconstruction results on real images. We also discuss gaps between our model assumptions and the CNN trained for classification in practical scenarios.",
"title": ""
},
{
"docid": "5a5b30b63944b92b168de7c17d5cdc5e",
"text": "We introduce the Densely Segmented Supermarket (D2S) dataset, a novel benchmark for instance-aware semantic segmentation in an industrial domain. It contains 21 000 high-resolution images with pixel-wise labels of all object instances. The objects comprise groceries and everyday products from 60 categories. The benchmark is designed such that it resembles the real-world setting of an automatic checkout, inventory, or warehouse system. The training images only contain objects of a single class on a homogeneous background, while the validation and test sets are much more complex and diverse. To further benchmark the robustness of instance segmentation methods, the scenes are acquired with different lightings, rotations, and backgrounds. We ensure that there are no ambiguities in the labels and that every instance is labeled comprehensively. The annotations are pixel-precise and allow using crops of single instances for articial data augmentation. The dataset covers several challenges highly relevant in the field, such as a limited amount of training data and a high diversity in the test and validation sets. The evaluation of state-of-the-art object detection and instance segmentation methods on D2S reveals significant room for improvement.",
"title": ""
},
{
"docid": "1f6d0e820b169d13e961b672b75bde71",
"text": "Prenatal stress can cause long-term effects on cognitive functions in offspring. Hippocampal synaptic plasticity, believed to be the mechanism underlying certain types of learning and memory, and known to be sensitive to behavioral stress, can be changed by prenatal stress. Whether enriched environment treatment (EE) in early postnatal periods can cause a recovery from these deficits is unknown. Experimental animals were Wistar rats. Prenatal stress was evoked by 10 foot shocks (0.8 mA for 1s, 2-3 min apart) in 30 min per day at gestational day 13-19. After weaning at postnatal day 22, experimental offspring were given the enriched environment treatment through all experiments until tested (older than 52 days age). Electrophysiological and Morris water maze testing was performed at 8 weeks of age. The results showed that prenatal stress impaired long-term potentiation (LTP) but facilitated long-term depression (LTD) in the hippocampal CA1 region in the slices. Furthermore, prenatal stress exacerbated the effects of acute stress on hippocampal LTP and LTD, and also impaired spatial learning and memory in the Morris water maze. However, all these deficits induced by prenatal stress were recovered by enriched environment treatment. This work observes a phenomenon that may contribute to the understanding of clinically important interactions among cognitive deficit, prenatal stress and enriched environment treatment. Enriched environment treatment on early postnatal periods may be one potentially important target for therapeutic interventions in preventing the prenatal stress-induced cognitive disorders.",
"title": ""
},
{
"docid": "c9df206d8c0bc671f3109c1c7b12b149",
"text": "Internet of Things (IoT) — a unified network of physical objects that can change the parameters of the environment or their own, gather information and transmit it to other devices. It is emerging as the third wave in the development of the internet. This technology will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. The IoT is enabled by the latest developments, smart sensors, communication technologies, and Internet protocols. This article contains a description of lnternet of things (IoT) networks. Much attention is given to prospects for future of using IoT and it's development. Some problems of development IoT are were noted. The article also gives valuable information on building(construction) IoT systems based on PLC technology.",
"title": ""
},
{
"docid": "61b89a2be8b2acc34342dfcc0249f4d5",
"text": "Transfer-learning and meta-learning are two effective methods to apply knowledge learned from large data sources to new tasks. In few-class, few-shot target task settings (i.e. when there are only a few classes and training examples available in the target task), meta-learning approaches that optimize for future task learning have outperformed the typical transfer approach of initializing model weights from a pre-trained starting point. But as we experimentally show, meta-learning algorithms that work well in the few-class setting do not generalize well in many-shot and many-class cases. In this paper, we propose a joint training approach that combines both transfer-learning and meta-learning. Benefiting from the advantages of each, our method obtains improved generalization performance on unseen target tasks in both fewand many-class and fewand manyshot scenarios.",
"title": ""
},
{
"docid": "3a3a2261e1063770a9ccbd0d594aa561",
"text": "This paper describes an advanced care and alert portable telemedical monitor (AMON), a wearable medical monitoring and alert system targeting high-risk cardiac/respiratory patients. The system includes continuous collection and evaluation of multiple vital signs, intelligent multiparameter medical emergency detection, and a cellular connection to a medical center. By integrating the whole system in an unobtrusive, wrist-worn enclosure and applying aggressive low-power design techniques, continuous long-term monitoring can be performed without interfering with the patients' everyday activities and without restricting their mobility. In the first two and a half years of this EU IST sponsored project, the AMON consortium has designed, implemented, and tested the described wrist-worn device, a communication link, and a comprehensive medical center software package. The performance of the system has been validated by a medical study with a set of 33 subjects. The paper describes the main concepts behind the AMON system and presents details of the individual subsystems and solutions as well as the results of the medical validation.",
"title": ""
},
{
"docid": "1bef1c66ac1e6e052f5751b11808d9d6",
"text": "There is a growing trend towards attacks on database privacy due to great value of privacy information stored in big data set. Public's privacy are under threats as adversaries are continuously cracking their popular targets such as bank accounts. We find a fact that existing models such as K-anonymity, group records based on quasi-identifiers, which harms the data utility a lot. Motivated by this, we propose a sensitive attribute-based privacy model. Our model is the early work of grouping records based on sensitive attributes instead of quasi-identifiers which is popular in existing models. Random shuffle is used to maximize information entropy inside a group while the marginal distribution maintains the same before and after shuffling, therefore, our method maintains a better data utility than existing models. We have conducted extensive experiments which confirm that our model can achieve a satisfying privacy level without sacrificing data utility while guarantee a higher efficiency.",
"title": ""
},
{
"docid": "14dd650afb3dae58ffb1a798e065825a",
"text": "Copilot is a coprocessor-based kernel integrity monitor for commodity systems. Copilot is designed to detect malicious modifications to a host’s kernel and has correctly detected the presence of 12 real-world rootkits, each within 30 seconds of their installation with less than a 1% penalty to the host’s performance. Copilot requires no modifications to the protected host’s software and can be expected to operate correctly even when the host kernel is thoroughly compromised – an advantage over traditional monitors designed to run on the host itself.",
"title": ""
},
{
"docid": "da6771ebd128ce1dc58f2ab1d56b065f",
"text": "We present a method for the automatic classification of text documents into a dynamically defined set of topics of interest. The proposed approach requires only a domain ontology and a set of user-defined classification topics, specified as contexts in the ontology. Our method is based on measuring the semantic similarity of the thematic graph created from a text document and the ontology sub-graphs resulting from the projection of the defined contexts. The domain ontology effectively becomes the classifier, where classification topics are expressed using the defined ontological contexts. In contrast to the traditional supervised categorization methods, the proposed method does not require a training set of documents. More importantly, our approach allows dynamically changing the classification topics without retraining of the classifier. In our experiments, we used the English language Wikipedia converted to an RDF ontology to categorize a corpus of current Web news documents into selection of topics of interest. The high accuracy achieved in our tests demonstrates the effectiveness of the proposed method, as well as the applicability of Wikipedia for semantic text categorization purposes.",
"title": ""
},
{
"docid": "a32d6897d74397f5874cc116221af207",
"text": "A plausible definition of “reasoning” could be “algebraically manipulating previously acquired knowledge in order to answer a new question”. This definition covers first-order logical inference or probabilistic inference. It also includes much simpler manipulations commonly used to build large learning systems. For instance, we can build an optical character recognition system by first training a character segmenter, an isolated character recognizer, and a language model, using appropriate labelled training sets. Adequately concatenating these modules and fine tuning the resulting system can be viewed as an algebraic operation in a space of models. The resulting model answers a new question, that is, converting the image of a text page into a computer readable text. This observation suggests a conceptual continuity between algebraically rich inference systems, such as logical or probabilistic inference, and simple manipulations, such as the mere concatenation of trainable learning systems. Therefore, instead of trying to bridge the gap between machine learning systems and sophisticated “all-purpose” inference mechanisms, we can instead algebraically enrich the set of manipulations applicable to training systems, and build reasoning capabilities from the ground up.",
"title": ""
},
{
"docid": "f39abb67a6c392369c5618f5c33d93cf",
"text": "In our research, we view human behavior as a structured sequence of context-sensitive decisions. We develop a conditional probabilistic model for predicting human decisions given the contextual situation. Our approach employs the principle of maximum entropy within the Markov Decision Process framework. Modeling human behavior is reduced to recovering a context-sensitive utility function that explains demonstrated behavior within the probabilistic model. In this work, we review the development of our probabilistic model (Ziebart et al. 2008a) and the results of its application to modeling the context-sensitive route preferences of drivers (Ziebart et al. 2008b). We additionally expand the approach’s applicability to domains with stochastic dynamics, present preliminary experiments on modeling time-usage, and discuss remaining challenges for applying our approach to other human behavior modeling problems.",
"title": ""
},
{
"docid": "bdfc21b5ae86711f093806b976258d33",
"text": "A generic and robust approach for the detection of road vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present a novel approach to the automatic detection of vehicles based on using multiple trained cascaded Haar classifiers (a disjunctive set of cascades). Our approach facilitates the realtime detection of both static and moving vehicles independent of orientation, colour, type and configuration. The results presented show the successful detection of differing vehicle types under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. The technique is realised on aerial imagery obtained at 1Hz from an optical camera on the medium UAV B-MAV platform with results presented to include those from the MoD Grand Challenge 2008.",
"title": ""
},
{
"docid": "396ce5ec8ef03a55ed022c4b580531bb",
"text": "BACKGROUND\nThe aim of this study was to evaluate if the presence of a bovine aortic arch (BAA)- the most common aortic arch anomaly-influences the location of the primary entry tear, the surgical procedure, and the outcome of patients undergoing operation for type A acute aortic dissection (AAD).\n\n\nMETHODS\nA total of 157 patients underwent emergency operations because of AAD (71% men, mean age 59.5 ± 13 years). Preoperative computed tomographic scans were screened for the presence of BAA. Patients were separated into 2 groups: presenting with BAA (BAA+, n = 22) or not (BAA-, n = 135). Location of the primary tear, surgical treatment, outcome, and risk factors for postoperative neurologic injury and in-hospital mortality were analyzed.\n\n\nRESULTS\nFourteen percent (22 of 157) of all patients operated on for AAD had a concomitant BAA. Location of the primary entry tear was predominantly in the aortic arch in patients with BAA (BAA+, 59.1% versus BAA-, 13.3%; p < 0.001). Multivariate analysis revealed the presence of a BAA to be an independent risk factor for having the primary tear in the aortic arch (odds ratio [OR], 14.79; 95% confidence interval [CI] 4.54-48.13; p < 0.001) but not for in-hospital mortality. Patients with BAA had a higher rate of postoperative neurologic injury (BAA+, 35% versus BAA-, 7.9%; p = 0.004). Multivariate analysis identified the presence of BAA as an independent risk factor for postoperative neurologic injury (OR, 4.9; 95% CI, 1.635-14.734; p = 0.005).\n\n\nCONCLUSIONS\nIn type A AAD, the presence of a BAA predicts the location of the primary entry site in the aortic arch and is an independent risk factor for a poor neurologic outcome.",
"title": ""
},
{
"docid": "0580342f7efb379fc417d2e5e48c4b73",
"text": "The use of System Dynamics Modeling in Supply Chain Management has only recently re-emerged after a lengthy slack period. Current research on System Dynamics Modelling in supply chain management focuses on inventory decision and policy development, time compression, demand amplification, supply chain design and integration, and international supply chain management. The paper first gives an overview of recent research work in these areas, followed by a discussion of research issues that have evolved, and presents a taxonomy of research and development in System Dynamics Modelling in supply chain management.",
"title": ""
}
] |
scidocsrr
|
acc8ee963ac07519f2056794fab5eb44
|
An axiomatic approach for result diversification
|
[
{
"docid": "c0c7752c6b9416e281c3649e70f9daae",
"text": "Although the study of clustering is centered around an intuitively compelling goal, it has been very difficult to develop a unified framework for reasoning about it at a technical level, and profoundly diverse approaches to clustering abound in the research community. Here we suggest a formal perspective on the difficulty in finding such a unification, in the form of an impossibility theorem: for a set of three simple properties, we show that there is no clustering function satisfying all three. Relaxations of these properties expose some of the interesting (and unavoidable) trade-offs at work in well-studied clustering techniques such as single-linkage, sum-of-pairs, k-means, and k-median.",
"title": ""
}
] |
[
{
"docid": "de08442e673ba8ca91244fedb020796c",
"text": "The differences between the fields of Human-Computer Interaction and Security (HCISec) and Human-Computer Interaction (HCI) have not been investigated very closely. Many HCI methods and procedures have been adopted by HCISec researchers, however the extent to which these apply to the field of HCISec is arguable given the fine balance between improving the ease of use of a secure system and potentially weakening its security. That is to say that the techniques prevalent in HCI are aimed at improving users' effectiveness, efficiency or satisfaction, but they do not take into account the potential threats and vulnerabilities that they can introduce. To address this problem, we propose a security and usability threat model detailing the different factors that are pertinent to the security and usability of secure systems, together with a process for assessing these.",
"title": ""
},
{
"docid": "42984b6e288bb144619d01ba37bfce68",
"text": "Reinforcement learning has steadily improved and outperform human in lots of traditional games since the resurgence of deep neural network. However, these success is not easy to be copied to autonomous driving because the state spaces in real world are extreme complex and action spaces are continuous and fine control is required. Moreover, the autonomous driving vehicles must also keep functional safety under the complex environments. To deal with these challenges, we first adopt the deep deterministic policy gradient (DDPG) algorithm, which has the capacity to handle complex state and action spaces in continuous domain. We then choose The Open Racing Car Simulator (TORCS) as our environment to avoid physical damage. Meanwhile, we select a set of appropriate sensor information from TORCS and design our own rewarder. In order to fit DDPG algorithm to TORCS, we design our network architecture for both actor and critic inside DDPG paradigm. To demonstrate the effectiveness of our model, We evaluate on different modes in TORCS and show both quantitative and qualitative results.",
"title": ""
},
{
"docid": "002abd54753db9928d8e6832d3358084",
"text": "State-of-the-art semantic role labelling systems require large annotated corpora to achieve full performance. Unfortunately, such corpora are expensive to produce and often do not generalize well across domains. Even in domain, errors are often made where syntactic information does not provide sufficient cues. In this paper, we mitigate both of these problems by employing distributional word representations gathered from unlabelled data. While straight-forward word representations of predicates and arguments improve performance, we show that further gains are achieved by composing representations that model the interaction between predicate and argument, and capture full argument spans.",
"title": ""
},
{
"docid": "06a10608b51cc1ae6c7ef653faf637a9",
"text": "WE aLL KnoW how to protect our private or most valuable data from unauthorized access: encrypt it. When a piece of data M is encrypted under a key K to yield a ciphertext C=EncK(M), only the intended recipient (who knows the corresponding secret decryption key S) will be able to invert the encryption function and recover the original plaintext using the decryption algorithm DecS(C)=DecS(EncK(M))=M. Encryption today—in both symmetric (where S=K) and public key versions (where S remains secret even when K is made publicly available)—is widely used to achieve confidentiality in many important and well-known applications: online banking, electronic shopping, and virtual private networks are just a few of the most common applications using encryption, typically as part of a larger protocol, like the TLS protocol used to secure communication over the Internet. Still, the use of encryption to protect valuable or sensitive data can be very limiting and inflexible. Once the data M is encrypted, the corresponding ciphertext C behaves to a large extent as a black box: all we can do with the box is keep it closed or opened in order to access and operate on the data. In many situations this may be exactly what we want. For example, take a remote storage system, where we want to store a large collection of documents or data files. We store the data in encrypted form, and when we want to access a specific piece of data, we retrieve the corresponding ciphertext, decrypting it locally on our own trusted computer. But as soon as we go beyond the simple data storage/ retrieval model, we are in trouble. Say we want the remote system to provide a more complex functionality, like a database system capable of indexing and searching our data, or answering complex relational or semistructured queries. Using standard encryption technology we are immediately faced with a dilemma: either we store our data unencrypted and reveal our precious or sensitive data to the storage/ database service provider, or we encrypt it and make it impossible for the provider to operate on it. If data is encrypted, then answering even a simple counting query (for example, the number of records or files that contain a certain keyword) would typically require downloading and decrypting the entire database content. Homomorphic encryption is a special kind of encryption that allows operating on ciphertexts without decrypting them; in fact, without even knowing the decryption key. For example, given ciphertexts C=EncK(M) and C'=EncK(M'), an additively homomorphic encryption scheme would allow to combine C and C' to obtain EncK(M+M'). Such encryption schemes are immensely useful in the design of complex cryptographic protocols. For example, an electronic voting scheme may collect encrypted votes Ci=EncK(Mi) where each vote Mi is either 0 or 1, and then tally them to obtain the encryption of the outcome C=EncK(M1+..+Mn). This would be decrypted by an appropriate authority that has the decryption key and ability to announce the result, but the entire collection and tallying process would operate on encrypted data without the use of the secret key. (Of course, this is an oversimplified protocol, as many other issues must be addressed in a real election scheme, but it well illustrates the potential usefulness of homomorphic encryption.) To date, all known homomorphic encryption schemes supported essentially only one basic operation, for example, addition. 
But the potential of fully homomorphic encryption (that is, homomorphic encryption supporting arbitrarily complex computations on ciphertexts) is clear. Think of encrypting your queries before you send them to your favorite search engine, and receive the encryption of the result without the search engine even knowing what the query was. Imagine running your most computationally intensive programs on your large datasets on a cluster of remote computers, as in a cloud computing environment, while keeping both your programs, data, and results encrypted and confidential. The idea of fully homomorphic encryption schemes was first proposed by Rivest, Adleman, and Dertouzos the late 1970s, but remained a mirage for three decades, the never-to-be-found Holy Grail of cryptography. At least until 2008, when Craig Gentry announced a new approach to the construction of fully homomorphic cryptosystems. In the following paper, Gentry describes his innovative method for constructing fully homomorphic encryption schemes, the first credible solution to this long-standing major problem in cryptography and theoretical computer science at large. While much work is still to be done before fully homomorphic encryption can be used in practice, Gentry’s work is clearly a landmark achievement. Before Gentry’s discovery many members of the cryptography research community thought fully homomorphic encryption was impossible to achieve. Now, most cryptographers (me among them) are convinced the Holy Grail exists. In fact, there must be several of them, more or less efficient ones, all out there waiting to be discovered. Gentry gives a very accessible and enjoyable description of his general method to achieve fully homomorphic encryption as well as a possible instantiation of his framework recently proposed by van Dijik, Gentry, Halevi, and Vaikuntanathan. He has taken great care to explain his technically complex results, some of which have their roots in lattice-based cryptography, using a metaphorical tale of a jeweler and her quest to keep her precious materials safe, while at the same time allowing her employees to work on them. Gentry’s homomorphic encryption work is truly worth a read.",
"title": ""
},
{
"docid": "5c74d0cfcbeaebc29cdb58a30436556a",
"text": "Modular decomposition is an effective means to achieve a complex system, but that of current part-component-based does not meet the needs of the positive development of the production. Design Structure Matrix (DSM) can simultaneously reflect the sequence, iteration, and feedback information, and express the parallel, sequential, and coupled relationship between DSM elements. This article, a modular decomposition method, named Design Structure Matrix Clustering modularize method, is proposed, concerned procedures are define, based on sorting calculate and clustering analysis of DSM, according to the rules of rows exchanges and columns exchange with the same serial number. The purpose and effectiveness of DSM clustering modularize method are confirmed through case study of assembly and calibration system for the large equipment.",
"title": ""
},
{
"docid": "8fe5ad58edf4a1c468fd0b6a303729ee",
"text": "Das CDISC Operational Data Model (ODM) ist ein populärer Standard in klinischen Datenmanagementsystemen (CDMS). Er beschreibt sowohl die Struktur einer klinischen Prüfung inklusive der Visiten, Formulare, Datenele mente und Codelisten als auch administrative Informationen wie gültige Nutzeracco unts. Ferner enthält er alle erhobenen klinischen Fakten über die Pro banden. Sein originärer Einsatzzweck liegt in der Archivierung von Studiendatenbanken und dem Austausch klinischer Daten zwischen verschiedenen CDMS. Aufgrund de r reichhaltigen Struktur eignet er sich aber auch für weiterführende Anwendungsfälle. Im Rahmen studentischer Praktika wurden verschied ene Szenarien für funktionale Ergänzungen des freien CDMS OpenClinica unters ucht und implementiert, darunter die Generierung eines Annotated CRF, der Import vo n Studiendaten per Web-Service, das semiautomatisierte Anlegen von Studien so wie der Export von Studiendaten in einen relationalen Data Mart und in ein Forschungs-Data-Warehouse auf Basis von i2b2.",
"title": ""
},
{
"docid": "7ce9ef05d3f4a92f6b187d7986b70be1",
"text": "With the growth in the consumer electronics industry, it is vital to develop an algorithm for ultrahigh definition products that is more effective and has lower time complexity. Image interpolation, which is based on an autoregressive model, has achieved significant improvements compared with the traditional algorithm with respect to image reconstruction, including a better peak signal-to-noise ratio (PSNR) and improved subjective visual quality of the reconstructed image. However, the time-consuming computation involved has become a bottleneck in those autoregressive algorithms. Because of the high time cost, image autoregressive-based interpolation algorithms are rarely used in industry for actual production. In this study, in order to meet the requirements of real-time reconstruction, we use diverse compute unified device architecture (CUDA) optimization strategies to make full use of the graphics processing unit (GPU) (NVIDIA Tesla K80), including a shared memory and register and multi-GPU optimization. To be more suitable for the GPU-parallel optimization, we modify the training window to obtain a more concise matrix operation. Experimental results show that, while maintaining a high PSNR and subjective visual quality and taking into account the I/O transfer time, our algorithm achieves a high speedup of 147.3 times for a Lena image and 174.8 times for a 720p video, compared to the original single-threaded C CPU code with -O2 compiling optimization.",
"title": ""
},
{
"docid": "e7bedfa690b456a7a93e5bdae8fff79c",
"text": "During the past several years, there have been a significant number of researches conducted in the area of semiconductor final test scheduling problems (SFTSP). As specific example of simultaneous multiple resources scheduling problem (SMRSP), intelligent manufacturing planning and scheduling based on meta-heuristic methods, such as Genetic Algorithm (GA), Simulated Annealing (SA), and Particle Swarm Optimization (PSO), have become the common tools for finding satisfactory solutions within reasonable computational times in real settings. However, limited researches were aiming at analyze the effects of interdependent relations during group decision-making activities. Moreover for complex and large problems, local constraints and objectives from each managerial entity, and their contributions towards the global objectives cannot be effectively represented in a single model. In this paper, we propose a novel Cooperative Estimation of Distribution Algorithm (CEDA) to overcome the challenges mentioned before. The CEDA is established based on divide-and-conquer strategy and a co-evolutionary framework. Considerable experiments have been conducted and the results confirmed that CEDA outperforms recent research results for scheduling problems in FMS (Flexible Manufacturing Systems).",
"title": ""
},
{
"docid": "628a4f05cc6c39585bcca8d5f503f277",
"text": "Recent studies have linked atmospheric particulate matter with human health problems. In many urban areas, mobile sources are a major source of particulate matter (PM) and the dominant source of fine particles or PM2.5 (PM smaller than 2.5 pm in aerodynamic diameter). Dynamometer studies have implicated diesel engines as being a significant source of ultrafine particles (< 0.1 microm), which may also exhibit deleterious health impacts. In addition to direct tailpipe emissions, mobile sources contribute to ambient particulate levels by brake and tire wear and by resuspension of particles from pavement. Information about particle emission rates, size distributions, and chemical composition from in-use light-duty (LD) and heavy-duty (HD) vehicles is scarce, especially under real-world operating conditions. To characterize particulate emissions from a limited set of in-use vehicles, we studied on-road emissions from vehicles operating under hot-stabilized conditions, at relatively constant speed, in the Tuscarora Mountain Tunnel along the Pennsylvania Turnpike from May 18 through 23, 1999. There were five specific aims of the study. (1) obtain chemically speciated diesel profiles for the source apportionment of diesel versus other ambient constituents in the air and to determine the chemical species present in real-world diesel emissions; (2) measure particle number and size distribution of chemically speciated particles in the atmosphere; (3) identify, by reference to data in years past, how much change has occurred in diesel exhaust particulate mass; (4) measure particulate emissions from LD gasoline vehicles to determine their contribution to the observed particle levels compared to diesels; and (5) determine changes over time in gas phase emissions by comparing our results with those of previous studies. Comparing the results of this study with our 1992 results, we found that emissions of C8 to C20 hydrocarbons, carbon monoxide (CO), and carbon dioxide (CO2) from HD diesel emissions substantially decreased over the seven-year period. Particulate mass emissions showed a similar trend. Considering a 25-year period, we observed a continued downward trend in HD particulate emissions from approximately 1,100 mg/km in 1974 to 132 mg/km (reported as PM2.5) in this study. The LD particle emission factor was considerably less than the HD value, but given the large fraction of LD vehicles, emissions from this source cannot be ignored. Results of the current study also indicate that both HD and LD vehicles emit ultrafine particles and that these particles are preserved under real-world dilution conditions. Particle number distributions were dominated by ultrafine particles with count mean diameters of 17 to 13 nm depending on fleet composition. These particles appear to be primarily composed of sulfur, indicative of sulfuric acid emission and nucleation. Comparing the 1992 and 1999 HD emission rates, we observed a 48% increase in the NOx/CO2 emissions ratio. This finding supports the assumption that many new-technology diesel engines conserve fuel but increase NOx emissions.",
"title": ""
},
{
"docid": "929c3c0bd01056851952660ffd90673a",
"text": "SUMMARY: The Food and Drug Administration (FDA) is issuing this proposed rule to amend the 1994 tentative final monograph or proposed rule (the 1994 TFM) for over-the-counter (OTC) antiseptic drug products. In this proposed rule, we are proposing to establish conditions under which OTC antiseptic products intended for use by health care professionals in a hospital setting or other health care situations outside the hospital are generally recognized as safe and effective. In the 1994 TFM, certain antiseptic active ingredients were proposed as being generally recognized as safe for use in health care settings based on safety data evaluated by FDA as part of its ongoing review of OTC antiseptic drug products. However, in light of more recent scientific developments, we are now proposing that additional safety data are necessary to support the safety of antiseptic active ingredients for these uses. We also are proposing that all health care antiseptic active ingredients have in vitro data characterizing the ingredient's antimicrobial properties and in vivo clinical simulation studies showing that specified log reductions in the amount of certain bacteria are achieved using the ingredient. DATES: Submit electronic or written comments by October 28, 2015. See section VIII of this document for the proposed effective date of a final rule based on this proposed rule. ADDRESSES: You may submit comments by any of the following methods: Electronic Submissions Submit electronic comments in the following way: • Federal eRulemaking Portal: http:// www.regulations.gov. Follow the instructions for submitting comments.",
"title": ""
},
{
"docid": "4cfc991f626f6fc9d131514985863127",
"text": "Simultaneous recordings from large neural populations are becoming increasingly common. An important feature of population activity is the trial-to-trial correlated fluctuation of spike train outputs from recorded neuron pairs. Similar to the firing rate of single neurons, correlated activity can be modulated by a number of factors, from changes in arousal and attentional state to learning and task engagement. However, the physiological mechanisms that underlie these changes are not fully understood. We review recent theoretical results that identify three separate mechanisms that modulate spike train correlations: changes in input correlations, internal fluctuations and the transfer function of single neurons. We first examine these mechanisms in feedforward pathways and then show how the same approach can explain the modulation of correlations in recurrent networks. Such mechanistic constraints on the modulation of population activity will be important in statistical analyses of high-dimensional neural data.",
"title": ""
},
{
"docid": "b791d4e531f893e529595110d0331822",
"text": "Recently, various ensemble learning methods with different base classifiers have been proposed for credit scoring problems. However, for various reasons, there has been little research using logistic regression as the base classifier. In this paper, given large unbalanced data, we consider the plausibility of ensemble learning using regularized logistic regression as the base classifier to deal with credit scoring problems. In this research, the data is first balanced and diversified by clustering and bagging algorithms. Then we apply a Lasso-logistic regression learning ensemble to evaluate the credit risks. We show that the proposed algorithm outperforms popular credit scoring models such as decision tree, Lasso-logistic regression and random forests in terms of AUC and F-measure. We also provide two importance measures for the proposed model to identify important variables in the data.",
"title": ""
},
{
"docid": "e5e6f213762e3c89f536a0ea2fc554f8",
"text": "New and emerging terahertz technology applications make this a very exciting time for the scientists, engineers, and technologists in the field. New sensors and detectors have been the primary driving force behind the unprecedented progress in terahertz technology, but in the last decade extraordinary developments in terahertz sources have also occurred. Driven primarily by space based missions for Earth, planetary, and astrophysical science, frequency multiplied sources have dominated the field in recent years, at least in the 2-3 THz frequency range. More recently, over the past few years terahertz quantum cascade lasers (QCLs) have made tremendous strides, finding increasing applications in terahertz systems. Vacuum electronic devices and photonic sources are not far behind either. In this article, the various technologies for terahertz sources are reviewed, and future trends are discussed.",
"title": ""
},
{
"docid": "89a73876c24508d92050f2055292d641",
"text": "We study the fundamental problem of computing distances between nodes in large graphs such as the web graph and social networks. Our objective is to be able to answer distance queries between pairs of nodes in real time. Since the standard shortest path algorithms are expensive, our approach moves the time-consuming shortest-path computation offline, and at query time only looks up precomputed values and performs simple and fast computations on these precomputed values. More specifically, during the offline phase we compute and store a small \"sketch\" for each node in the graph, and at query-time we look up the sketches of the source and destination nodes and perform a simple computation using these two sketches to estimate the distance.",
"title": ""
},
{
"docid": "1cbdf72cbb83763040abedb74748f6cd",
"text": "Cyber attack is one of the most rapidly growing threats to the world of cutting edge information technology. As new tools and techniques are emerging everyday to make information accessible over the Internet, so is their vulnerabilities. Cyber defense is inevitable in order to ensure reliable and secure communication and transmission of information. Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) are the major technologies dominating in the area of cyber defense. Tremendous efforts have already been put in intrusion detection research for decades but intrusion prevention research is still in its infancy. This paper provides a comprehensive review of the current research in both Intrusion Detection Systems and recently emerged Intrusion Prevention Systems. Limitations of current research works in both fields are also discussed in conclusion.",
"title": ""
},
{
"docid": "d4641f30306d5e653da94ccdeec2239c",
"text": "Terpenes are economically and ecologically important phytochemicals. Their synthesis is controlled by the terpene synthase (TPS) gene family, which is highly diversified throughout the plant kingdom. The plant family Myrtaceae are characterised by especially high terpene concentrations, and considerable variation in terpene profiles. Many Myrtaceae are grown commercially for terpene products including the eucalypts Corymbia and Eucalyptus. Eucalyptus grandis has the largest TPS gene family of plants currently sequenced, which is largely conserved in the closely related E. globulus. However, the TPS gene family has been well studied only in these two eucalypt species. The recent assembly of two Corymbia citriodora subsp. variegata genomes presents an opportunity to examine the conservation of this important gene family across more divergent eucalypt lineages. Manual annotation of the TPS gene family in C. citriodora subsp. variegata revealed a similar overall number, and relative subfamily representation, to that previously reported in E. grandis and E. globulus. Many of the TPS genes were in physical clusters that varied considerably between Eucalyptus and Corymbia, with several instances of translocation, expansion/contraction and loss. Notably, there was greater conservation in the subfamilies involved in primary metabolism than those involved in secondary metabolism, likely reflecting different selective constraints. The variation in cluster size within subfamilies and the broad conservation between the eucalypts in the face of this variation are discussed, highlighting the potential contribution of selection, concerted evolution and stochastic processes. These findings provide the foundation to better understand terpene evolution within the ecologically and economically important Myrtaceae.",
"title": ""
},
{
"docid": "00b13f673d9e6efc1edebf2641204ea6",
"text": "Two studies examined the effects of implicit and explicit priming of aging stereotypes. Implicit primes had a significant effect on older adults' memory, with positive primes associated with greater recall than negative primes. With explicit primes, older adults were able to counteract the impact of negative stereotypes when the cues were relatively subtle, but blatant stereotype primes suppressed performance regardless of prime type. No priming effects under either presentation condition were obtained for younger adults, indicating that the observed implicit effects are specific to those for whom the stereotype is self-relevant. Findings emphasize the importance of social-situational factors in determining older adults' memory performance and contribute to the delineation of situations under which stereotypes are most influential.",
"title": ""
},
{
"docid": "d1069c06341e484e7f3b5ab7a4a49a2d",
"text": "In a \"nutrition transition\", the consumption of foods high in fats and sweeteners is increasing throughout the developing world. The transition, implicated in the rapid rise of obesity and diet-related chronic diseases worldwide, is rooted in the processes of globalization. Globalization affects the nature of agri-food systems, thereby altering the quantity, type, cost and desirability of foods available for consumption. Understanding the links between globalization and the nutrition transition is therefore necessary to help policy makers develop policies, including food policies, for addressing the global burden of chronic disease. While the subject has been much discussed, tracing the specific pathways between globalization and dietary change remains a challenge. To help address this challenge, this paper explores how one of the central mechanisms of globalization, the integration of the global marketplace, is affecting the specific diet patterns. Focusing on middle-income countries, it highlights the importance of three major processes of market integration: (I) production and trade of agricultural goods; (II) foreign direct investment in food processing and retailing; and (III) global food advertising and promotion. The paper reveals how specific policies implemented to advance the globalization agenda account in part for some recent trends in the global diet. Agricultural production and trade policies have enabled more vegetable oil consumption; policies on foreign direct investment have facilitated higher consumption of highly-processed foods, as has global food marketing. These dietary outcomes also reflect the socioeconomic and cultural context in which these policies are operating. An important finding is that the dynamic, competitive forces unleashed as a result of global market integration facilitates not only convergence in consumption habits (as is commonly assumed in the \"Coca-Colonization\" hypothesis), but adaptation to products targeted at different niche markets. This convergence-divergence duality raises the policy concern that globalization will exacerbate uneven dietary development between rich and poor. As high-income groups in developing countries accrue the benefits of a more dynamic marketplace, lower-income groups may well experience convergence towards poor quality obseogenic diets, as observed in western countries. Global economic policies concerning agriculture, trade, investment and marketing affect what the world eats. They are therefore also global food and health policies. Health policy makers should pay greater attention to these policies in order to address some of the structural causes of obesity and diet-related chronic diseases worldwide, especially among the groups of low socioeconomic status.",
"title": ""
},
{
"docid": "c9ff6e6c47b6362aaba5f827dd1b48f2",
"text": "IEC 62056 for upper-layer protocols and IEEE 802.15.4g for communication infrastructure are promising means of advanced metering infrastructure (AMI) in Japan. However, since the characteristics of a communication system based on these combined technologies have yet to be identified, this paper gives the communication failure rates and latency acquired by calculations. In addition, the calculation results suggest some adequate AMI configurations, and show its extensibility in consideration of the usage environment.",
"title": ""
},
{
"docid": "7b314cd0c326cb977b92f4907a0ed737",
"text": "This is the third part of a series of papers that provide a comprehensive survey of the techniques for tracking maneuvering targets without addressing the so-called measurement-origin uncertainty. Part I [1] and Part II [2] deal with general target motion models and ballistic target motion models, respectively. This part surveys measurement models, including measurement model-based techniques, used in target tracking. Models in Cartesian, sensor measurement, their mixed, and other coordinates are covered. The stress is on more recent advances — topics that have received more attention recently are discussed in greater details.",
"title": ""
}
] |
scidocsrr
|
31676b77fc40d569e619caec0dd4fc17
|
A Pan-Cancer Proteogenomic Atlas of PI3K/AKT/mTOR Pathway Alterations.
|
[
{
"docid": "99ff0acb6d1468936ae1620bc26c205f",
"text": "The Cancer Genome Atlas (TCGA) has used the latest sequencing and analysis methods to identify somatic variants across thousands of tumours. Here we present data and analytical results for point mutations and small insertions/deletions from 3,281 tumours across 12 tumour types as part of the TCGA Pan-Cancer effort. We illustrate the distributions of mutation frequencies, types and contexts across tumour types, and establish their links to tissues of origin, environmental/carcinogen influences, and DNA repair defects. Using the integrated data sets, we identified 127 significantly mutated genes from well-known (for example, mitogen-activated protein kinase, phosphatidylinositol-3-OH kinase, Wnt/β-catenin and receptor tyrosine kinase signalling pathways, and cell cycle control) and emerging (for example, histone, histone modification, splicing, metabolism and proteolysis) cellular processes in cancer. The average number of mutations in these significantly mutated genes varies across tumour types; most tumours have two to six, indicating that the number of driver mutations required during oncogenesis is relatively small. Mutations in transcriptional factors/regulators show tissue specificity, whereas histone modifiers are often mutated across several cancer types. Clinical association analysis identifies genes having a significant effect on survival, and investigations of mutations with respect to clonal/subclonal architecture delineate their temporal orders during tumorigenesis. Taken together, these results lay the groundwork for developing new diagnostics and individualizing cancer treatment.",
"title": ""
}
] |
[
{
"docid": "6d00686ad4d2d589a415d810b2fcc876",
"text": "The accuracy of voice activity detection (VAD) is one of the most important factors which influence the capability of the speech recognition system, how to detect the endpoint precisely in noise environment is still a difficult task. In this paper, we proposed a new VAD method based on Mel-frequency cepstral coefficients (MFCC) similarity. We first extracts the MFCC of a voice signal for each frame, followed by calculating the MFCC Euclidean distance and MFCC correlation coefficient of the test frame and the background noise, Finally, give the experimental results. The results show that at low SNR circumstance, MFCC similarity detection method is better than traditional short-term energy method. Compared with Euclidean distance measure method, correlation coefficient is better.",
"title": ""
},
{
"docid": "070d23b78d7808a19bde68f0ccdd7587",
"text": "Deep learning is playing a more and more important role in our daily life and scientific research such as autonomous systems, intelligent life and data mining. However, numerous studies have showed that deep learning with superior performance on many tasks may suffer from subtle perturbations constructed by attacker purposely, called adversarial perturbations, which are imperceptible to human observers but completely effect deep neural network models. The emergence of adversarial attacks has led to questions about neural networks. Therefore, machine learning security and privacy are becoming an increasingly active research area. In this paper, we summarize the prevalent methods for the generating adversarial attacks according to three groups. We elaborated on their ideas and principles of generation. We further analyze the common limitations of these methods and implement statistical experiments of the last layer output on CleverHans to reveal that the detection of adversarial samples is not as difficult as it seems and can be achieved in some relatively simple manners.",
"title": ""
},
{
"docid": "e21aed852a892cbede0a31ad84d50a65",
"text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.09.010 ⇑ Corresponding author. Tel.: +1 662 915 5519. E-mail addresses: [email protected] (C. R (D. Gamboa), [email protected] (F. Glover), [email protected] (C. Osterman). Heuristics for the traveling salesman problem (TSP) have made remarkable advances in recent years. We survey the leading methods and the special components responsible for their successful implementations, together with an experimental analysis of computational tests on a challenging and diverse set of symmetric and asymmetric TSP benchmark problems. The foremost algorithms are represented by two families, deriving from the Lin–Kernighan (LK) method and the stem-and-cycle (S&C) method. We show how these families can be conveniently viewed within a common ejection chain framework which sheds light on their similarities and differences, and gives clues about the nature of potential enhancements to today’s best methods that may provide additional gains in solving large and difficult TSPs. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "47de26ecd5f759afa7361c7eff9e9b25",
"text": "At many teaching hospitals, it is common practice for on-call radiology residents to interpret radiology examinations; such reports are later reviewed and revised by an attending physician before being used for any decision making. In case there are substantial problems in the resident’s initial report, the resident is called and the problems are reviewed to prevent similar future reporting errors. However, due to the large volume of reports produced, attending physicians rarely discuss the problems side by side with residents, thus missing an educational opportunity. In this work, we introduce a pipeline to discriminate between reports with significant discrepancies and those with non-significant discrepancies. The former contain severe errors or mis-interpretations, thus representing a great learning opportunity for the resident; the latter presents only minor differences (often stylistic) and have a minor role in the education of a resident. By discriminating between the two, the proposed system could flag those reports that an attending radiology should definitely review with residents under their supervision. We evaluated our approach on 350 manually annotated radiology reports sampled from a collection of tens of thousands. The proposed classifier achieves an Area Under the Curve (AUC) of 0.837, which represent a 14% improvement over the baselines. Furthermore, the classifier reduces the False Negative Rate (FNR) by 52%, a desirable performance metric for any recall-oriented task such as the one studied",
"title": ""
},
{
"docid": "48485e967c5aa345a53b91b47cc0e6d0",
"text": "The buccinator musculomucosal flaps are actually considered the main reconstructive option for small-moderate defects of the oral mucosa. In this paper we present our experience with the posteriorly based buccinator musculomucosal flap. A retrospective review was performed of all patients who had had a Bozola flap reconstruction at the Operative Unit of Maxillo-Facial Surgery of Parma, Italy, between 2003 and 2010. The Bozola flap was used in 19 patients. In most cases they had defects of the palate (n=12). All flaps were harvested successfully and no major complications occurred. Minor complications were observed in two cases. At the end of the follow up all patients returned to a normal diet without alterations of speech and swallowing. We consider the Bozola flap the first choice for the reconstruction of defects involving the palate, the cheek and the postero-lateral tongue and floor of the mouth.",
"title": ""
},
{
"docid": "d7f743ddff9863b046ab91304b37a667",
"text": "In sensor networks, passive localization can be performed by exploiting the received signals of unknown emitters. In this paper, the Time of Arrival (TOA) measurements are investigated. Often, the unknown time of emission is eliminated by calculating the difference between two TOA measurements where Time Difference of Arrival (TDOA) measurements are obtained. In TOA processing, additionally, the unknown time of emission is to be estimated. Therefore, the target state is extended by the unknown time of emission. A comparison is performed investigating the attainable accuracies for localization based on TDOA and TOA measurements given by the Cramér-Rao Lower Bound (CRLB). Using the Maximum Likelihood estimator, some characteristic features of the cost functions are investigated indicating a better performance of the TOA approach. But counterintuitive, Monte Carlo simulations do not support this indication, but show the comparability of TDOA and TOA localization.",
"title": ""
},
{
"docid": "8a37001733b0ee384277526bd864fe04",
"text": "Miscreants use DDoS botnets to attack a victim via a large number of malware-infected hosts, combining the bandwidth of the individual PCs. Such botnets have thus a high potential to render targeted services unavailable. However, the actual impact of attacks by DDoS botnets has never been evaluated. In this paper, we monitor C&C servers of 14 DirtJumper and Yoddos botnets and record the DDoS targets of these networks. We then aim to evaluate the availability of the DDoS victims, using a variety of measurements such as TCP response times and analyzing the HTTP content. We show that more than 65% of the victims are severely affected by the DDoS attacks, while also a few DDoS attacks likely failed.",
"title": ""
},
{
"docid": "7c1c7eb4f011ace0734dd52759ce077f",
"text": "OBJECTIVES\nTo investigate the treatment effects of bilateral robotic priming combined with the task-oriented approach on motor impairment, disability, daily function, and quality of life in patients with subacute stroke.\n\n\nDESIGN\nA randomized controlled trial.\n\n\nSETTING\nOccupational therapy clinics in medical centers.\n\n\nSUBJECTS\nThirty-one subacute stroke patients were recruited.\n\n\nINTERVENTIONS\nParticipants were randomly assigned to receive bilateral priming combined with the task-oriented approach (i.e., primed group) or to the task-oriented approach alone (i.e., unprimed group) for 90 minutes/day, 5 days/week for 4 weeks. The primed group began with the bilateral priming technique by using a bimanual robot-aided device.\n\n\nMAIN MEASURES\nMotor impairments were assessed by the Fugal-Meyer Assessment, grip strength, and the Box and Block Test. Disability and daily function were measured by the modified Rankin Scale, the Functional Independence Measure, and actigraphy. Quality of life was examined by the Stroke Impact Scale.\n\n\nRESULTS\nThe primed and unprimed groups improved significantly on most outcomes over time. The primed group demonstrated significantly better improvement on the Stroke Impact Scale strength subscale ( p = 0.012) and a trend for greater improvement on the modified Rankin Scale ( p = 0.065) than the unprimed group.\n\n\nCONCLUSION\nBilateral priming combined with the task-oriented approach elicited more improvements in self-reported strength and disability degrees than the task-oriented approach by itself. Further large-scale research with at least 31 participants in each intervention group is suggested to confirm the study findings.",
"title": ""
},
{
"docid": "52c0c6d1deacdca44df5000b2b437c78",
"text": "This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the utilization of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation- maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.",
"title": ""
},
{
"docid": "2e987add43a584bdd0a67800ad28c5f8",
"text": "The bones of elderly people with osteoporosis are susceptible to either traumatic fracture as a result of external impact, such as what happens during a fall, or even spontaneous fracture without trauma as a result of muscle contraction [1, 2]. Understanding the fracture behavior of bone tissue will help researchers find proper treatments to strengthen the bone in order to prevent such fractures, and design better implants to reduce the chance of secondary fracture after receiving the implant.",
"title": ""
},
{
"docid": "863db7439c2117e36cc2a789b557a665",
"text": "A core brain network has been proposed to underlie a number of different processes, including remembering, prospection, navigation, and theory of mind [Buckner, R. L., & Carroll, D. C. Self-projection and the brain. Trends in Cognitive Sciences, 11, 49–57, 2007]. This purported network—medial prefrontal, medial-temporal, and medial and lateral parietal regions—is similar to that observed during default-mode processing and has been argued to represent self-projection [Buckner, R. L., & Carroll, D. C. Self-projection and the brain. Trends in Cognitive Sciences, 11, 49–57, 2007] or scene-construction [Hassabis, D., & Maguire, E. A. Deconstructing episodic memory with construction. Trends in Cognitive Sciences, 11, 299–306, 2007]. To date, no systematic and quantitative demonstration of evidence for this common network has been presented. Using the activation likelihood estimation (ALE) approach, we conducted four separate quantitative meta-analyses of neuroimaging studies on: (a) autobiographical memory, (b) navigation, (c) theory of mind, and (d) default mode. A conjunction analysis between these domains demonstrated a high degree of correspondence. We compared these findings to a separate ALE analysis of prospection studies and found additional correspondence. Across all domains, and consistent with the proposed network, correspondence was found within the medial-temporal lobe, precuneus, posterior cingulate, retrosplenial cortex, and the temporo-parietal junction. Additionally, this study revealed that the core network extends to lateral prefrontal and occipital cortices. Autobiographical memory, prospection, theory of mind, and default mode demonstrated further reliable involvement of the medial prefrontal cortex and lateral temporal cortices. Autobiographical memory and theory of mind, previously studied as distinct, exhibited extensive functional overlap. These findings represent quantitative evidence for a core network underlying a variety of cognitive domains.",
"title": ""
},
{
"docid": "566412870c83e5e44fabc50487b9d994",
"text": "The influence of technology in the field of gambling innovation continues to grow at a rapid pace. After a brief overview of gambling technologies and deregulation issues, this review examines the impact of technology on gambling by highlighting salient factors in the rise of Internet gambling (i.e., accessibility, affordability, anonymity, convenience, escape immersion/dissociation, disinhibition, event frequency, asociability, interactivity, and simulation). The paper also examines other factors in relation to Internet gambling including the relationship between Internet addiction and Internet gambling addiction. The paper ends by overviewing some of the social issues surrounding Internet gambling (i.e., protection of the vulnerable, Internet gambling in the workplace, electronic cash, and unscrupulous operators). Recommendations for Internet gambling operators are also provided.",
"title": ""
},
{
"docid": "28574c82a49b096b11f1b78b5d62e425",
"text": "A major reason for the current reproducibility crisis in the life sciences is the poor implementation of quality control measures and reporting standards. Improvement is needed, especially regarding increasingly complex in vitro methods. Good Cell Culture Practice (GCCP) was an effort from 1996 to 2005 to develop such minimum quality standards also applicable in academia. This paper summarizes recent key developments in in vitro cell culture and addresses the issues resulting for GCCP, e.g. the development of induced pluripotent stem cells (iPSCs) and gene-edited cells. It further deals with human stem-cell-derived models and bioengineering of organo-typic cell cultures, including organoids, organ-on-chip and human-on-chip approaches. Commercial vendors and cell banks have made human primary cells more widely available over the last decade, increasing their use, but also requiring specific guidance as to GCCP. The characterization of cell culture systems including high-content imaging and high-throughput measurement technologies increasingly combined with more complex cell and tissue cultures represent a further challenge for GCCP. The increasing use of gene editing techniques to generate and modify in vitro culture models also requires discussion of its impact on GCCP. International (often varying) legislations and market forces originating from the commercialization of cell and tissue products and technologies are further impacting on the need for the use of GCCP. This report summarizes the recommendations of the second of two workshops, held in Germany in December 2015, aiming map the challenge and organize the process or developing a revised GCCP 2.0.",
"title": ""
},
{
"docid": "59c2e1dcf41843d859287124cc655b05",
"text": "Atherosclerotic cardiovascular disease (ASCVD) is the most common cause of death in most Western countries. Nutrition factors contribute importantly to this high risk for ASCVD. Favourable alterations in diet can reduce six of the nine major risk factors for ASCVD, i.e. high serum LDL-cholesterol levels, high fasting serum triacylglycerol levels, low HDL-cholesterol levels, hypertension, diabetes and obesity. Wholegrain foods may be one the healthiest choices individuals can make to lower the risk for ASCVD. Epidemiological studies indicate that individuals with higher levels (in the highest quintile) of whole-grain intake have a 29 % lower risk for ASCVD than individuals with lower levels (lowest quintile) of whole-grain intake. It is of interest that neither the highest levels of cereal fibre nor the highest levels of refined cereals provide appreciable protection against ASCVD. Generous intake of whole grains also provides protection from development of diabetes and obesity. Diets rich in wholegrain foods tend to decrease serum LDL-cholesterol and triacylglycerol levels as well as blood pressure while increasing serum HDL-cholesterol levels. Whole-grain intake may also favourably alter antioxidant status, serum homocysteine levels, vascular reactivity and the inflammatory state. Whole-grain components that appear to make major contributions to these protective effects are: dietary fibre; vitamins; minerals; antioxidants; phytosterols; other phytochemicals. Three servings of whole grains daily are recommended to provide these health benefits.",
"title": ""
},
{
"docid": "66370e97fba315711708b13e0a1c9600",
"text": "Cloud Computing is the long dreamed vision of computing as a utility, where users can remotely store their data into the cloud so as to enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the possibly large size of outsourced data makes the data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met: 1) TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user; 2) The third party auditing process should bring in no new vulnerabilities towards user data privacy. In this paper, we utilize and uniquely combine the public key based homomorphic authenticator with random masking to achieve the privacy-preserving public cloud data auditing system, which meets all above requirements. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multi-user setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.",
"title": ""
},
{
"docid": "a2f65eb4a81bc44ea810d834ab33d891",
"text": "This survey provides the basis for developing research in the area of mobile manipulator performance measurement, an area that has relatively few research articles when compared to other mobile manipulator research areas. The survey provides a literature review of mobile manipulator research with examples of experimental applications. The survey also provides an extensive list of planning and control references as this has been the major research focus for mobile manipulators which factors into performance measurement of the system. The survey then reviews performance metrics considered for mobile robots, robot arms, and mobile manipulators and the systems that measure their performance, including machine tool measurement systems through dynamic motion tracking systems. Lastly, the survey includes a section on research that has occurred for performance measurement of robots, mobile robots, and mobile manipulators beginning with calibration, standards, and mobile manipulator artifacts that are being considered for evaluation of mobile manipulator performance.",
"title": ""
},
{
"docid": "d56807574d6185c6e3cd9a8e277f8006",
"text": "There is a substantial literature on e-government that discusses information and communication technology (ICT) as an instrument for reducing the role of bureaucracy in government organizations. The purpose of this paper is to offer a critical discussion of this literature and to provide a complementary argument, which favors the use of ICT in the public sector to support the operations of bureaucratic organizations. Based on the findings of a case study – of the Venice municipality in Italy – the paper discusses how ICT can be used to support rather than eliminate bureaucracy. Using the concepts of e-bureaucracy and functional simplification and closure, the paper proposes evidence and support for the argument that bureaucracy should be preserved and enhanced where e-government policies are concerned. Functional simplification and closure are very valuable concepts for explaining why this should be a viable approach.",
"title": ""
},
{
"docid": "77bbeb9510f4c9000291910bf06e4a22",
"text": "Traveling Salesman Problem is an important optimization issue of many fields such as transportation, logistics and semiconductor industries and it is about finding a Hamiltonian path with minimum cost. To solve this problem, many researchers have proposed different approaches including metaheuristic methods. Artificial Bee Colony algorithm is a well known swarm based optimization technique. In this paper we propose a new Artificial Bee Colony algorithm called Combinatorial ABC for Traveling Salesman Problem. Simulation results show that this Artificial Bee Colony algorithm can be used for combinatorial optimization problems.",
"title": ""
},
{
"docid": "dbafea1fbab901ff5a53f752f3bfb4b8",
"text": "Three studies were conducted to test the hypothesis that high trait aggressive individuals are more affected by violent media than are low trait aggressive individuals. In Study 1, participants read film descriptions and then chose a film to watch. High trait aggressive individuals were more likely to choose a violent film to watch than were low trait aggressive individuals. In Study 2, participants reported their mood before and after the showing of a violet or nonviolent videotape. High trait aggressive individuals felt more angry after viewing the violent videotape than did low trait aggressive individuals. In Study 3, participants first viewed either a violent or a nonviolent videotape and then competed with an \"opponent\" on a reaction time task in which the loser received a blast of unpleasant noise. Videotape violence was more likely to increase aggression in high trait aggressive individuals than in low trait aggressive individuals.",
"title": ""
},
{
"docid": "de761c4e3e79b5b4d056552e0a71a7b6",
"text": "A novel multiple-input multiple-output (MIMO) dielectric resonator antenna (DRA) for long term evolution (LTE) femtocell base stations is described. The proposed antenna is able to transmit and receive information independently using TE and HE modes in the LTE bands 12 (698-716 MHz, 728-746 MHz) and 17 (704-716 MHz, 734-746 MHz). A systematic design method based on perturbation theory is proposed to induce mode degeneration for MIMO operation. Through perturbing the boundary of the DRA, the amount of energy stored by a specific mode is changed as well as the resonant frequency of that mode. Hence, by introducing an adequate boundary perturbation, the TE and HE modes of the DRA will resonate at the same frequency and share a common impedance bandwidth. The simulated mutual coupling between the modes was as low as - 40 dB . It was estimated that in a rich scattering environment with an Signal-to-Noise Ratio (SNR) of 20 dB per receiver branch, the proposed MIMO DRA was able to achieve a channel capacity of 11.1 b/s/Hz (as compared to theoretical maximum 2 × 2 capacity of 13.4 b/s/Hz). Our experimental measurements successfully demonstrated the design methodology proposed in this work.",
"title": ""
}
] |
scidocsrr
|
2465d29191b3ef50436fd60e65b42940
|
A new rail inspection method based on deep learning using laser cameras
|
[
{
"docid": "e560cd7561d4f518cdab6bd1f5441de8",
"text": "Rail inspection is a very important task in railway maintenance, and it is periodically needed for preventing dangerous situations. Inspection is operated manually by trained human operator walking along the track searching for visual anomalies. This monitoring is unacceptable for slowness and lack of objectivity, as the results are related to the ability of the observer to recognize critical situations. The correspondence presents a patent-pending real-time Visual Inspection System for Railway (VISyR) maintenance, and describes how presence/absence of the fastening bolts that fix the rails to the sleepers is automatically detected. VISyR acquires images from a digital line-scan camera. Data are simultaneously preprocessed according to two discrete wavelet transforms, and then provided to two multilayer perceptron neural classifiers (MLPNCs). The \"cross validation\" of these MLPNCs avoids (practically-at-all) false positives, and reveals the presence/absence of the fastening bolts with an accuracy of 99.6% in detecting visible bolts and of 95% in detecting missing bolts. A field-programmable gate array-based architecture performs these tasks in 8.09 mus, allowing an on-the-fly analysis of a video sequence acquired at 200 km/h",
"title": ""
}
] |
[
{
"docid": "db75809bcc029a4105dc12c63e2eca76",
"text": "Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal ‘fingerprint’ of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease.",
"title": ""
},
{
"docid": "d319a17ad2fa46e0278e0b0f51832f4b",
"text": "Automatic Essay Assessor (AEA) is a system that utilizes information retrieval techniques such as Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Latent Dirichlet Allocation (LDA) for automatic essay grading. The system uses learning materials and relatively few teacher-graded essays for calibrating the scoring mechanism before grading. We performed a series of experiments using LSA, PLSA and LDA for document comparisons in AEA. In addition to comparing the methods on a theoretical level, we compared the applicability of LSA, PLSA, and LDA to essay grading with empirical data. The results show that the use of learning materials as training data for the grading model outperforms the k-NN-based grading methods. In addition to this, we found that using LSA yielded slightly more accurate grading than PLSA and LDA. We also found that the division of the learning materials in the training data is crucial. It is better to divide learning materials into sentences than paragraphs.",
"title": ""
},
{
"docid": "51b1f69c4bdc5fd034f482ad9ffa4549",
"text": "The synapse is the focus of experimental research and theory on the cellular mechanisms of nervous system plasticity and learning, but recent research is expanding the consideration of plasticity into new mechanisms beyond the synapse, notably including the possibility that conduction velocity could be modifiable through changes in myelin to optimize the timing of information transmission through neural circuits. This concept emerges from a confluence of brain imaging that reveals changes in white matter in the human brain during learning, together with cellular studies showing that the process of myelination can be influenced by action potential firing in axons. This Opinion article summarizes the new research on activity-dependent myelination, explores the possible implications of these studies and outlines the potential for new research.",
"title": ""
},
{
"docid": "dfa890a87b2e5ac80f61c793c8bca791",
"text": "Reinforcement learning (RL) algorithms have traditionally been thought of as trial and error learning methods that use actual control experience to incrementally improve a control policy. Sutton's DYNA architecture demonstrated that RL algorithms can work as well using simulated experience from an environment model, and that the resulting computation was similar to doing one-step lookahead planning. Inspired by the literature on hierarchical planning, I propose learning a hierarchy of models of the environment that abstract temporal detail as a means of improving the scalability of RL algorithms. I present H-DYNA (Hierarchical DYNA), an extension to Sutton's DYNA architecture that is able to learn such a hierarchy of abstract models. H-DYNA di ers from hierarchical planners in two ways: rst, the abstract models are learned using experience gained while learning to solve other tasks in the same environment, and second, the abstract models can be used to solve stochastic control tasks. Simulations on a set of compositionally-structured navigation tasks show that H-DYNA can learn to solve them faster than conventional RL algorithms. The abstract models also serve as mechanisms for achieving transfer of learning across multiple tasks.",
"title": ""
},
{
"docid": "c429bf418a4ecbd56c7b2ab6f4ca3cd6",
"text": "The Internet exhibits a gigantic measure of helpful data which is generally designed for its users, which makes it hard to extract applicable information from different sources. Accordingly, the accessibility of strong, adaptable Information Extraction framework that consequently concentrate structured data such as, entities, relationships between entities, and attributes from unstructured or semi-structured sources. But somewhere during extraction of information may lead to the loss of its meaning, which is absolutely not feasible. Semantic Web adds solution to this problem. It is about providing meaning to the data and allow the machine to understand and recognize these augmented data more accurately. The proposed system is about extracting information from research data of IT domain like journals of IEEE, Springer, etc., which aid researchers and the organizations to get the data of journals in an optimized manner so the time and hard work of surfing and reading the entire journal's papers or articles reduces. Also the accuracy of the system is taken care of using RDF, the data extracted has a specific declarative semantics so that the meaning of the research papers or articles during extraction remains unchanged. In addition, the same approach shall be applied on multiple documents, so that time factor can get saved.",
"title": ""
},
{
"docid": "16e2f269c21eaf2bf856bb0996ab8135",
"text": "In this paper, we present a cryptographic technique for an authenticated, end-to-end verifiable and secret ballot election. Voters should receive assurance that their vote is cast as intended, recorded as cast and tallied as recorded. The election system as a whole should ensure that voter coercion is unlikely, even when voters are willing to be influenced. Currently, almost all verifiable e-voting systems require trusted authorities to perform the tallying process. An exception is the DRE-i and DRE-ip system. The DRE-ip system removes the requirement of tallying authorities by encrypting ballot in such a way that the election tally can be publicly verified without decrypting cast ballots. However, the DRE-ip system necessitates a secure bulletin board (BB) for storing the encrypted ballot as without it the integrity of the system may be lost and the result can be compromised without detection during the audit phase. In this paper, we have modified the DRE-ip system so that if any recorded ballot is tampered by an adversary before the tallying phase, it will be detected during the tallying phase. In addition, we have described a method using zero knowledge based public blockchain to store these ballots so that it remains tamper proof. To the best of our knowledge, it is the first end-toend verifiable Direct-recording electronic (DRE) based e-voting system using blockchain. In our case, we assume that the bulletin board is insecure and an adversary has read and write access to the bulletin board. We have also added a secure biometric with government provided identity card based authentication mechanism for voter authentication. The proposed system is able to encrypt ballot in such a way that the election tally can be publicly verified without decrypting cast ballots maintaining end-to-end verifiability and without requiring the secure bulletin board.",
"title": ""
},
{
"docid": "202dc8823d3d16bc26653727ac1ef67f",
"text": "Near-sensor data analytics is a promising direction for internet-of-things endpoints, as it minimizes energy spent on communication and reduces network load - but it also poses security concerns, as valuable data are stored or sent over the network at various stages of the analytics pipeline. Using encryption to protect sensitive data at the boundary of the on-chip analytics engine is a way to address data security issues. To cope with the combined workload of analytics and encryption in a tight power envelope, we propose Fulmine, a system-on-chip (SoC) based on a tightly-coupled multi-core cluster augmented with specialized blocks for compute-intensive data processing and encryption functions, supporting software programmability for regular computing tasks. The Fulmine SoC, fabricated in 65-nm technology, consumes less than 20mW on average at 0.8V achieving an efficiency of up to 70pJ/B in encryption, 50pJ/px in convolution, or up to 25MIPS/mW in software. As a strong argument for real-life flexible application of our platform, we show experimental results for three secure analytics use cases: secure autonomous aerial surveillance with a state-of-the-art deep convolutional neural network (CNN) consuming 3.16pJ per equivalent reduced instruction set computer operation, local CNN-based face detection with secured remote recognition in 5.74pJ/op, and seizure detection with encrypted data collection from electroencephalogram within 12.7pJ/op.",
"title": ""
},
{
"docid": "9f68df51d0d47b539a6c42207536d012",
"text": "Schizophrenia-spectrum risk alleles may persist in the population, despite their reproductive costs in individuals with schizophrenia, through the possible creativity benefits of mild schizotypy in non-psychotic relatives. To assess this creativity-benefit model, we measured creativity (using 6 verbal and 8 drawing tasks), schizotypy, Big Five personality traits, and general intelligence in 225 University of New Mexico students. Multiple regression analyses showed that openness and intelligence, but not schizotypy, predicted reliable observer ratings of verbal and drawing creativity. Thus, the 'madness-creativity' link seems mediated by the personality trait of openness, and standard creativity-benefit models seem unlikely to explain schizophrenia's evolutionary persistence.",
"title": ""
},
{
"docid": "e51f4a7eb2e933057f18a625a6e926ff",
"text": "In this paper an integrated wide-band transition from a differential micro-strip line to a rectangular WR-15 waveguide is presented. The transition makes use of a cavity that is entirely integrated into the multilayer printed circuit board (PCB), which offers three layers (RF signal layer, ground plane and DC signal layer) for signal routing. The transition including the 18 mm long micro-strip feed lines provides a bandwidth of 20 GHz from 50 GHz to 70 GHz and an insertion loss of less than 2.3 dB. This makes the transition perfectly suited for differential wide-band transceivers operating in the 60 GHz band.",
"title": ""
},
{
"docid": "767da6eef531b3dc54d6600e9d238ffa",
"text": "This review paper focuses on the neonatal brain segmentation algorithms in the literature. It provides an overview of clinical magnetic resonance imaging (MRI) of the newborn brain and the challenges in automated tissue classification of neonatal brain MRI. It presents a complete survey of the existing segmentation methods and their salient features. The different approaches are categorized into intracranial and brain tissue segmentation algorithms based on their level of tissue classification. Further, the brain tissue segmentation techniques are grouped based on their atlas usage into atlas-based, augmented atlas-based and atlas-free methods. In addition, the research gaps and lacunae in literature are also identified.",
"title": ""
},
{
"docid": "b89099e9b01a83368a1ebdb2f4394eba",
"text": "Orangutans (Pongo pygmaeus and Pongo abelii) are semisolitary apes and, among the great apes, the most distantly related to humans. Raters assessed 152 orangutans on 48 personality descriptors; 140 of these orangutans were also rated on a subjective well-being questionnaire. Principal-components analysis yielded 5 reliable personality factors: Extraversion, Dominance, Neuroticism, Agreeableness, and Intellect. The authors found no factor analogous to human Conscientiousness. Among the orangutans rated on all 48 personality descriptors and the subjective well-being questionnaire, Extraversion, Agreeableness, and low Neuroticism were related to subjective well-being. These findings suggest that analogues of human, chimpanzee, and orangutan personality domains existed in a common ape ancestor.",
"title": ""
},
{
"docid": "1b78650b979b0043eeb3e7478a263846",
"text": "Our solutions was launched using a want to function as a full on-line digital local library that gives use of many PDF guide catalog. You may find many different types of e-guide as well as other literatures from my papers data bank. Specific popular topics that spread out on our catalog are famous books, answer key, assessment test questions and answer, guideline paper, training guideline, quiz test, consumer guide, consumer guidance, service instructions, restoration handbook, and so forth.",
"title": ""
},
{
"docid": "1a5c009f059ea28fd2d692d1de4eb913",
"text": "We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training. In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains. We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances. CROSSGRAD parallelly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives. This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions. Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.",
"title": ""
},
{
"docid": "69c8584255b16e6bc05fdfc6510d0dc4",
"text": "OBJECTIVE\nThis study assesses the psychometric properties of Ward's seven-subtest short form (SF) for WAIS-IV in a sample of adults with schizophrenia (SZ) and schizoaffective disorder.\n\n\nMETHOD\nSeventy patients diagnosed with schizophrenia or schizoaffective disorder were administered the full version of the WAIS-IV. Four different versions of the Ward's SF were then calculated. The subtests used were: Similarities, Digit Span, Arithmetic, Information, Coding, Picture Completion, and Block Design (BD version) or Matrix Reasoning (MR version). Prorated and regression-based formulae were assessed for each version.\n\n\nRESULTS\nThe actual and estimated factorial indexes reflected the typical pattern observed in schizophrenia. The four SFs correlated significantly with their full-version counterparts, but the Perceptual Reasoning Index (PRI) correlated below the acceptance threshold for all four versions. The regression-derived estimates showed larger differences compared to the full form. The four forms revealed comparable but generally low clinical category agreement rates for factor indexes. All SFs showed an acceptable reliability, but they were not correlated with clinical outcomes.\n\n\nCONCLUSIONS\nThe WAIS-IV SF offers a good estimate of WAIS-IV intelligence quotient, which is consistent with previous results. Although the overall scores are comparable between the four versions, the prorated forms provided a better estimation of almost all indexes. MR can be used as an alternative for BD without substantially changing the psychometric properties of the SF. However, we recommend a cautious use of these abbreviated forms when it is necessary to estimate the factor index scores, especially PRI, and Processing Speed Index.",
"title": ""
},
{
"docid": "865d7b8fae1cab739570229889177d58",
"text": "This paper presents design and implementation of scalar control of induction motor. This method leads to be able to adjust the speed of the motor by control the frequency and amplitude of the stator voltage of induction motor, the ratio of stator voltage to frequency should be kept constant, which is called as V/F or scalar control of induction motor drive. This paper presents a comparative study of open loop and close loop V/F control induction motor. The V/F",
"title": ""
},
{
"docid": "4f9df22aa072503e23384f62d4b5acdb",
"text": "Convolutional neural networks are designed for dense data, but vision data is often sparse (stereo depth, point clouds, pen stroke, etc.). We present a method to handle sparse depth data with optional dense RGB, and accomplish depth completion and semantic segmentation changing only the last layer. Our proposal efficiently learns sparse features without the need of an additional validity mask. We show how to ensure network robustness to varying input sparsities. Our method even works with densities as low as 0.8% (8 layer lidar), and outperforms all published state-of-the-art on the Kitti depth completion benchmark.",
"title": ""
},
{
"docid": "03cd6ef0cc0dab9f33b88dd7ae4227c2",
"text": "The dopaminergic system plays a pivotal role in the central nervous system via its five diverse receptors (D1–D5). Dysfunction of dopaminergic system is implicated in many neuropsychological diseases, including attention deficit hyperactivity disorder (ADHD), a common mental disorder that prevalent in childhood. Understanding the relationship of five different dopamine (DA) receptors with ADHD will help us to elucidate different roles of these receptors and to develop therapeutic approaches of ADHD. This review summarized the ongoing research of DA receptor genes in ADHD pathogenesis and gathered the past published data with meta-analysis and revealed the high risk of DRD5, DRD2, and DRD4 polymorphisms in ADHD.",
"title": ""
},
{
"docid": "3223d52743a64bc599488cdde8ef177b",
"text": "The resolution of a comparator is determined by the dc input offset and the ac noise. For mixed-mode applications with significant digital switching, input-referred supply noise can be a significant source of error. This paper proposes an offset compensation technique that can simultaneously minimize input-referred supply noise. Demonstrated with digital offset compensation, this scheme reduces input-referred supply noise to a small fraction (13%) of one least significant bit (LSB) digital offset. In addition, the same analysis can be applied to analog offset compensation.",
"title": ""
},
{
"docid": "97ac64bb4d06216253eacb17abfcb103",
"text": "UIMA Ruta is a rule-based system designed for information extraction tasks, but it is also applicable for many natural language processing use cases. This demonstration gives an overview of the UIMA Ruta Workbench, which provides a development environment and tooling for the rule language. It was developed to ease every step in engineering rule-based applications. In addition to the full-featured rule editor, the user is supported by explanation of the rule execution, introspection in results, automatic validation and rule induction. Furthermore, the demonstration covers the usage and combination of arbitrary components for natural language processing.",
"title": ""
}
] |
scidocsrr
|
3bfa12e3f0b02712b8ad6705a143a64a
|
iDedup: latency-aware, inline data deduplication for primary storage
|
[
{
"docid": "9361c6eaa2faaa3cfebc4a073ee8f3d3",
"text": "In this paper we present the analysis of two large-scale network file system workloads. We measured CIFS traffic for two enterprise-class file servers deployed in the NetApp data center for a three month period. One file server was used by marketing, sales, and finance departments and the other by the engineering department. Together these systems represent over 22 TB of storage used by over 1500 employees, making this the first ever large-scale study of the CIFS protocol. We analyzed how our network file system workloads compared to those of previous file system trace studies and took an in-depth look at access, usage, and sharing patterns. We found that our workloads were quite different from those previously studied; for example, our analysis found increased read-write file access patterns, decreased read-write ratios, more random file access, and longer file lifetimes. In addition, we found a number of interesting properties regarding file sharing, file re-use, and the access patterns of file types and users, showing that modern file system workload has changed in the past 5–10 years. This change in workload characteristics has implications on the future design of network file systems, which we describe in the paper.",
"title": ""
}
] |
[
{
"docid": "e83e6284d3c9cf8fddf972a25d869a1b",
"text": "Internet-based learning systems are being used in many universities and firms but their adoption requires a solid understanding of the user acceptance processes. Our effort used an extended version of the technology acceptance model (TAM), including cognitive absorption, in a formal empirical study to explain the acceptance of such systems. It was intended to provide insight for improving the assessment of on-line learning systems and for enhancing the underlying system itself. The work involved the examination of the proposed model variables for Internet-based learning systems acceptance. Using an on-line learning system as the target technology, assessment of the psychometric properties of the scales proved acceptable and confirmatory factor analysis supported the proposed model structure. A partial-least-squares structural modeling approach was used to evaluate the explanatory power and causal links of the model. Overall, the results provided support for the model as explaining acceptance of an on-line learning system and for cognitive absorption as a variable that influences TAM variables. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "49f96e96623502ffe6053cab43054edf",
"text": "Background YouTube, the online video creation and sharing site, supports both video content viewing and content creation activities. For a minority of people, the time spent engaging with YouTube can be excessive and potentially problematic. Method This study analyzed the relationship between content viewing, content creation, and YouTube addiction in a survey of 410 Indian-student YouTube users. It also examined the influence of content, social, technology, and process gratifications on user inclination toward YouTube content viewing and content creation. Results The results demonstrated that content creation in YouTube had a closer relationship with YouTube addiction than content viewing. Furthermore, social gratification was found to have a significant influence on both types of YouTube activities, whereas technology gratification did not significantly influence them. Among all perceived gratifications, content gratification had the highest relationship coefficient value with YouTube content creation inclination. The model fit and variance extracted by the endogenous constructs were good, which further validated the results of the analysis. Conclusion The study facilitates new ways to explore user gratification in using YouTube and how the channel responds to it.",
"title": ""
},
{
"docid": "59d3a3ec644d8554cbb2a5ac75a329f8",
"text": "Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis. Inductive Logic Programming (ILP) performs relational learning either directly by manipulating first-order rules or through propositionalization, which translates the relational task into an attribute-value learning task by representing subsets of relations as features. In this paper, we introduce a fast method and system for relational learning based on a novel propositionalization called Bottom Clause Propositionalization (BCP). Bottom clauses are boundaries in the hypothesis search space used by ILP systems Progol and Aleph. Bottom clauses carry semantic meaning and can be mapped directly onto numerical vectors, simplifying the feature extraction process. We have integrated BCP with a well-known neural-symbolic system, C-IL2P, to perform learning from numerical vectors. C-IL2P uses background knowledge in the form of propositional logic programs to build a neural network. The integrated system, which we call CILP++, handles first-order logic knowledge and is available for download from Sourceforge. We have evaluated CILP++ on seven ILP datasets, comparing results with Aleph and a well-known propositionalization method, RSD. The results show that CILP++ can achieve accuracy comparable to Aleph, while being generally faster, BCP achieved statistically significant improvement in accuracy in comparison with RSD when running with a neural network, but BCP and RSD perform similarly when running with C4.5. We have also extended CILP++ to include a statistical feature selection method, mRMR, with preliminary results indicating that a reduction of more than 90 % of features can be achieved with a small loss of accuracy.",
"title": ""
},
{
"docid": "d6e5a1307b33f8cd5cdf598747365a52",
"text": "Harassment of Working Women, Catharine MacKinnon notes that men’s victimization of women “is sufficiently pervasive in American society as to be nearly invisible” (1979:1). Since the publication of her book, sexual harassment has become increasingly visible, and variants of MacKinnon’s broad sociocultural explanation have gained broad acceptance (Schultz 2001; MacKinnon 2002; Tangri, Burt, and Johnson 1982; Welsh 1999). In reaction to evidence that at least some male and adolescent workers are targets of sexual harassment (Kalof et al. 2001; Talbot 2002; Thacker 1996), critics have begun to challenge feminist views of sexual harassment as an act committed by powerful adult males against “powerless females” (Patai 1998:170) as founded on “unexamined notions of male ‘power’ and predatoriness” (Patai 1998:59; see also Francke 1997; Schultz 1998:95). Nevertheless, a systematic examination of the theory’s basic propositions about gender and power has yet to emerge in the social science literature. No empirical study of sexual harassment has appeared in the prominent general interest sociology journals American Journal of Sociology, American Sociological Review, or Social Forces (Sever 1996). In light of the impressive body of sociological theory around the phenomenon, a burgeoning research literature in the top specialty journals, and the strong public interest it has generated, this void is surprising. The neglect of sexual harassment in mainstream sociology also forestalls research that could have broad implications. In this paper we Sexual Harassment as a Gendered Expression of Power",
"title": ""
},
{
"docid": "c4d0084aab61645fc26e099115e1995c",
"text": "Digital transformation often includes establishing big data analytics capabilities and poses considerable challenges for traditional manufacturing organizations, such as car companies. Successfully introducing big data analytics requires substantial organizational transformation and new organizational structures and business processes. Based on the three-stage evolution of big data analytics capabilities at AUDI, the full article provides recommendations for how traditional manufacturing organizations can successfully introduce big data analytics and master the related organizational transformations. Stage I: Advancing. In Stage I, AUDI’s sales and marketing department initiated data analytics projects. Commitment within the organization for data analytics grew slowly, and the strategic importance of the area was increasingly recognized. During this first stage, the IT department played a passive role, responding to the initiators of data analytics projects. The company’s digital innovation hub, however, laid the technology foundation for big data analytics during the Advancing stage. Stage II: Enabling. In Stage II, analytics competencies were built up not only in the digital innovation hub but also in the IT department. The IT department enabled big data analytics through isolated technology activities, sometimes taking on or insourcing tasks previously carried out by external consultancies or the digital innovation hub. Analytics services were developed through a more advanced technology infrastructure as well as analytics methods. Stage III: Leveraging. In the current Stage III, AUDI is leveraging the analytics competencies of the digital innovation hub and the IT department to centrally provide analytics-as-a-service. The IT department is now fully responsible for all technology tasks and is evolving to become a consulting partner for the other big data analytics stakeholders (sales and marketing department and digital innovation hub). In particular, digital services are enabled by leveraging the most valuable data source (i.e., operational car data).",
"title": ""
},
{
"docid": "a43646db20923d9058df5544a5753da0",
"text": "Smart objects connected to the Internet, constituting the so called Internet of Things (IoT), are revolutionizing human beings' interaction with the world. As technology reaches everywhere, anyone can misuse it, and it is always essential to secure it. In this work we present a denial-of-service (DoS) detection architecture for 6LoWPAN, the standard protocol designed by IETF as an adaptation layer for low-power lossy networks enabling low-power devices to communicate with the Internet. The proposed architecture integrates an intrusion detection system (IDS) into the network framework developed within the EU FP7 project ebbits. The aim is to detect DoS attacks based on 6LoWPAN. In order to evaluate the performance of the proposed architecture, preliminary implementation was completed and tested against a real DoS attack using a penetration testing system. The paper concludes with the related results proving to be successful in detecting DoS attacks on 6LoWPAN. Further, extending the IDS could lead to detect more complex attacks on 6LoWPAN.",
"title": ""
},
{
"docid": "f418441593da8db1dcbaa922cccc21fa",
"text": "Sentiment analysis, as a heatedly-discussed research topic in the area of information extraction, has attracted more attention from the beginning of this century. With the rapid development of the Internet, especially the rising popularity of Web2.0 technology, network user has become not only the content maker, but also the receiver of information. Meanwhile, benefiting from the development and maturity of the technology in natural language processing and machine learning, we can widely employ sentiment analysis on subjective texts. In this paper, we propose a supervised learning method on fine-grained sentiment analysis to meet the new challenges by exploring new research ideas and methods to further improve the accuracy and practicability of sentiment analysis. First, this paper presents an improved strength computation method of sentiment word. Second, this paper introduces a sentiment information joint recognition model based on Conditional Random Fields and analyzes the related knowledge of the basic and semantic features. Finally, the experimental results show that our approach and a demo system are feasible and effective.",
"title": ""
},
{
"docid": "545562f49534f9cf502f420e2e6fa420",
"text": "Automatic optimization of spoken dialog management policies that are robust to environmental noise has long been the goal for both academia and industry. Approaches based on reinforcement learning have been proved to be effective. However, the numerical representation of dialog policy is human-incomprehensible and difficult for dialog system designers to verify or modify, which limits its practical application. In this paper we propose a novel framework for optimizing dialog policies specified in domain language using genetic algorithm. The human-interpretable representation of policy makes the method suitable for practical employment. We present learning algorithms using user simulation and real human-machine dialogs respectively. Empirical experimental results are given to show the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "6d2ebecdd8120fb6bcfa805bd62d2899",
"text": "The oxidation of organic and inorganic compounds during ozonation can occur via ozone or OH radicals or a combination thereof. The oxidation pathway is determined by the ratio of ozone and OH radical concentrations and the corresponding kinetics. A huge database with several hundred rate constants for ozone and a few thousand rate constants for OH radicals is available. Ozone is an electrophile with a high selectivity. The second-order rate constants for oxidation by ozone vary over 10 orders of magnitude, between o0.1M s 1 and about 7 10M s . The reactions of ozone with drinking-water relevant inorganic compounds are typically fast and occur by an oxygen atom transfer reaction. Organic micropollutants are oxidized with ozone selectively. Ozone reacts mainly with double bonds, activated aromatic systems and non-protonated amines. In general, electron-donating groups enhance the oxidation by ozone whereas electron-withdrawing groups reduce the reaction rates. Furthermore, the kinetics of direct ozone reactions depend strongly on the speciation (acid-base, metal complexation). The reaction of OH radicals with the majority of inorganic and organic compounds is nearly diffusion-controlled. The degree of oxidation by ozone and OH radicals is given by the corresponding kinetics. Product formation from the ozonation of organic micropollutants in aqueous systems has only been established for a few compounds. It is discussed for olefines, amines and aromatic compounds. r 2002 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e3caf8dcb01139ae780616c022e1810d",
"text": "The relative age effect (RAE) and its relationships with maturation, anthropometry, and physical performance characteristics were examined across a representative sample of English youth soccer development programmes. Birth dates of 1,212 players, chronologically age-grouped (i.e., U9's-U18's), representing 17 professional clubs (i.e., playing in Leagues 1 & 2) were obtained and categorised into relative age quartiles from the start of the selection year (Q1 = Sep-Nov; Q2 = Dec-Feb; Q3 = Mar-May; Q4 = Jun-Aug). Players were measured for somatic maturation and performed a battery of physical tests to determine aerobic fitness (Multi-Stage Fitness Test [MSFT]), Maximal Vertical Jump (MVJ), sprint (10 & 20m), and agility (T-Test) performance capabilities. Odds ratio's (OR) revealed Q1 players were 5.3 times (95% confidence intervals [CI]: 4.08-6.83) more likely to be selected than Q4's, with a particularly strong RAE bias observed in U9 (OR: 5.56) and U13-U16 squads (OR: 5.45-6.13). Multivariate statistical models identified few between quartile differences in anthropometric and fitness characteristics, and confirmed chronological age-group and estimated age at peak height velocity (APHV) as covariates. Assessment of practical significance using magnitude-based inferences demonstrated body size advantages in relatively older players (Q1 vs. Q4) that were very-likely small (Effect Size [ES]: 0.53-0.57), and likely to very-likely moderate (ES: 0.62-0.72) in U12 and U14 squads, respectively. Relatively older U12-U14 players also demonstrated small advantages in 10m (ES: 0.31-0.45) and 20m sprint performance (ES: 0.36-0.46). The data identify a strong RAE bias at the entry-point to English soccer developmental programmes. RAE was also stronger circa-PHV, and relatively older players demonstrated anaerobic performance advantages during the pubescent period. Talent selectors should consider motor function and maturation status assessments to avoid premature and unwarranted drop-out of soccer players within youth development programmes.",
"title": ""
},
{
"docid": "8be48d08aec21ecdf8a124fa3fef8d48",
"text": "Topic modeling has become a widely used tool for document management. However, there are few topic models distinguishing the importance of documents on different topics. In this paper, we propose a framework LIMTopic to incorporate link based importance into topic modeling. To instantiate the framework, RankTopic and HITSTopic are proposed by incorporating topical pagerank and topical HITS into topic modeling respectively. Specifically, ranking methods are first used to compute the topical importance of documents. Then, a generalized relation is built between link importance and topic modeling. We empirically show that LIMTopic converges after a small number of iterations in most experimental settings. The necessity of incorporating link importance into topic modeling is justified based on KL-Divergences between topic distributions converted from topical link importance and those computed by basic topic models. To investigate the document network summarization performance of topic models, we propose a novel measure called log-likelihood of ranking-integrated document-word matrix. Extensive experimental results show that LIMTopic performs better than baseline models in generalization performance, document clustering and classification, topic interpretability and document network summarization performance. Moreover, RankTopic has comparable performance with relational topic model (RTM) and HITSTopic performs much better than baseline models in document clustering and classification.",
"title": ""
},
{
"docid": "a132eacb1a33c6b6d7a52b2b2f1be096",
"text": "People nowadays usually participate in multiple online social networks simultaneously to enjoy more social network services. Besides the common users, social networks providing similar services can also share many other kinds of information entities, e.g., locations, videos and products. However, these shared information entities in different networks are mostly isolated without any known corresponding connections. In this paper, we aim at inferring such potential corresponding connections linking multiple kinds of shared entities across networks simultaneously. Formally, the problem is referred to as the network “Partial Co-alignmenT” (PCT) problem. PCT is an important problem and can be the prerequisite for many concrete cross-network applications, like social network fusion, mutual information exchange and transfer. Meanwhile, the PCT problem is also very challenging to address due to various reasons, like (1) the heterogeneity of social networks, (2) lack of training instances to build models, and (3) one-to-one constraint on the correspondence connections. To resolve these challenges, a novel unsupervised network alignment framework, UNICOAT (UNsupervIsed COncurrent AlignmenT)), is introduced in this paper. Based on the heterogeneous information, UNICOAT transforms the PCT problem into a joint optimization problem. To solve the objective function, the one-to-one constraint on the corresponding relationships is relaxed, and the redundant non-existing corresponding connections introduced by such a relaxation will be pruned with a novel network co-matching algorithm proposed in this paper. Extensive experiments conducted on real-world coaligned social network datasets demonstrate the effectiveness of UNICOAT in addressing the PCT problem.",
"title": ""
},
{
"docid": "c6d6d5eb5fe80a9a54df948a2483a255",
"text": "Image Steganography is the process of embedding text in images such that its existence cannot be detected by Human Visual System (HVS) and is known only to sender and receiver. This paper presents a novel approach for image steganography using Hue-Saturation-Intensity (HSI) color space based on Least Significant Bit (LSB). The proposed method transforms the image from RGB color space to Hue-Saturation-Intensity (HSI) color space and then embeds secret data inside the Intensity Plane (I-Plane) and transforms it back to RGB color model after embedding. The said technique is evaluated by both subjective and Objective Analysis. Experimentally it is found that the proposed method have larger Peak Signal-to Noise Ratio (PSNR) values, good imperceptibility and multiple security levels which shows its superiority as compared to several existing methods.",
"title": ""
},
{
"docid": "6fca3aabf3812746a98bb7d5fb758a22",
"text": "The emergence and global spread of the 2009 pandemic H1N1 influenza virus reminds us that we are limited in the strategies available to control influenza infection. Vaccines are the best option for the prophylaxis and control of a pandemic; however, the lag time between virus identification and vaccine distribution exceeds 6 months and concerns regarding vaccine safety are a growing issue leading to vaccination refusal. In the short-term, antiviral therapy is vital to control the spread of influenza. However, we are currently limited to four licensed anti-influenza drugs: the neuraminidase inhibitors oseltamivir and zanamivir, and the M2 ion-channel inhibitors amantadine and rimantadine. The value of neuraminidase inhibitors was clearly established during the initial phases of the 2009 pandemic when vaccines were not available, i.e. stockpiles of antivirals are valuable. Unfortunately, as drug-resistant variants continue to emerge naturally and through selective pressure applied by use of antiviral drugs, the efficacy of these drugs declines. Because we cannot predict the strain of influenza virus that will cause the next epidemic or pandemic, it is important that we develop novel anti-influenza drugs with broad reactivity against all strains and subtypes, and consider moving to multiple drug therapy in the future. In this article we review the experimental data on investigational antiviral agents undergoing clinical trials (parenteral zanamivir and peramivir, long-acting neuraminidase inhibitors and the polymerase inhibitor favipiravir [T-705]) and experimental antiviral agents that target either the virus (the haemagglutinin inhibitor cyanovirin-N and thiazolides) or the host (fusion protein inhibitors [DAS181], cyclo-oxygenase-2 inhibitors and peroxisome proliferator-activated receptor agonists).",
"title": ""
},
{
"docid": "1ec0f3975731aa45c92973024c33a9b6",
"text": "This meta-analysis provides an extensive and organized summary of intervention studies in education that are grounded in motivation theory. We identified 74 published and unpublished papers that experimentally manipulated an independent variable and measured an authentic educational outcome within an ecologically valid educational context. Our analyses included 92 independent effect sizes with 38,377 participants. Our results indicated that interventions were generally effective, with an average mean effect size of d = 0.49 (95% confidence interval = [0.43, 0.56]). Although there were descriptive differences in the effect sizes across several moderator variables considered in our analyses, the only significant difference found was for the type of experimental design, with randomized designs having smaller effect sizes than quasi-experimental designs. This work illustrates the extent to which interventions and accompanying theories have been tested via experimental methods and provides information about appropriate next steps in developing and testing effective motivation interventions in education.",
"title": ""
},
{
"docid": "d8befc5eb47ac995e245cf9177c16d3d",
"text": "Our hypothesis is that the video game industry, in the attempt to simulate a realistic experience, has inadvertently collected very accurate data which can be used to solve problems in the real world. In this paper we describe a novel approach to soccer match prediction that makes use of only virtual data collected from a video game(FIFA 2015). Our results were comparable and in some places better than results achieved by predictors that used real data. We also use the data provided for each player and the players present in the squad, to analyze the team strategy. Based on our analysis, we were able to suggest better strategies for weak teams",
"title": ""
},
{
"docid": "5838d6a17e2223c6421da33d5985edd1",
"text": "In this article, I provide commentary on the Rudd et al. (2009) article advocating thorough informed consent with suicidal clients. I examine the Rudd et al. recommendations in light of their previous empirical-research and clinical-practice articles on suicidality, and from the perspective of clinical practice with suicidal clients in university counseling center settings. I conclude that thorough informed consent is a clinical intervention that is still in preliminary stages of development, necessitating empirical research and clinical training before actual implementation as an ethical clinical intervention. (PsycINFO Database Record (c) 2010 APA, all rights reserved).",
"title": ""
},
{
"docid": "4d58a451c018b25aaab9ab1312a0998c",
"text": "This paper presents a set of techniques that makes constraint programming a technique of choice for solving small (up to 30 nodes) traveling salesman problems. These techniques include a propagation scheme to avoid intermediate cycles (a global constraint), a branching scheme and a redundant constraint that can be used as a bounding method. The resulting improvement is that we can solve problems twice larger than those solved previously with constraint programming tools. We evaluate the use of Lagrangean Relaxation to narrow the gap between constraint programming and other Operations Research techniques and we show that improved constraint propagation has now a place in the array of techniques that should be used to solve a traveling salesman problem.",
"title": ""
},
{
"docid": "229a541fa4b8e9157c8cc057ae028676",
"text": "The proposed system introduces a new genetic algorithm for prediction of financial performance with input data sets from a financial domain. The goal is to produce a GA-based methodology for prediction of stock market performance along with an associative classifier from numerical data. This work restricts the numerical data to stock trading data. Stock trading data contains the quotes of stock market. From this information, many technical indicators can be extracted, and by investigating the relations between these indicators trading signals can discovered. Genetic algorithm is being used to generate all the optimized relations among the technical indicator and its value. Along with genetic algorithm association rule mining algorithm is used for generation of association rules among the various Technical Indicators. Associative rules are generated whose left side contains a set of trading signals, expressed by relations among the technical indicators, and whose right side indicates whether there is a positive ,negative or no change. The rules are being further given to the classification process which will be able to classify the new data making use of the previously generated rules. The proposed idea in the paper is to offer an efficient genetic algorithm in combination with the association rule mining algorithm which predicts stock market performance. Keywords— Genetic Algorithm, Associative Rule Mining, Technical Indicators, Associative rules, Stock Market, Numerical Data, Rules INTRODUCTION Over the last decades, there has been much research interests directed at understanding and predicting future. Among them, to forecast price movements in stock markets is a major challenge confronting investors, speculator and businesses. How to make a right decision in stock trading extracts many attentions from many financial and technical fields. Many technologies such as evolutionary optimization methods have been studied to help people find better way to earn more profit from the stock market. And the data mining method shows its power to improve the accuracy of stock movement prediction, with which more profit can be obtained with less risk. Applications of data mining techniques for stock investment include clustering, decision tree etc. Moreover, researches on stock market discover trading signals and timings from financial data. Because of the numerical attributes used, data mining techniques, such as decision tree, have weaker capabilities to handle this kind of numerical data and there are infinitely many possible ways to enumerate relations among data. Stock prices depend on various factors, the important ones being the market sentiment, performance of the industry, earning results and projected earnings, takeover or merger, introduction of a new product or introduction of an existing product into new markets, share buy-back, announcements of dividends/bonuses, addition or removal from the index and such other factors leading to a positive or negative impact on the share price and the associated volumes. Apart from the basic technical and fundamental analysis techniques used in stock market analysis and prediction, soft computing methods based on Association Rule Mining, fuzzy logic, neural networks, genetic algorithms etc. are increasingly finding their place in understanding and predicting the financial markets. Genetic algorithm has a great capability to discover good solutions rapidly for difficult high dimensional problems. 
The genetic algorithm has good capability to deal with numerical data and relations between numerical data. Genetic algorithms have emerged as a powerful general purpose search and optimization technique and have found applications in widespread areas. Associative classification, one of the most important tasks in data mining and knowledge discovery, builds a classification system based on associative classification rules. Association rules are learned and extracted from the available training dataset and the most suitable rules are selected to build an associative classification model. Association rule discovery has been used with great success in International Journal of Engineering Research and General Science Volume 3, Issue 1, January-February, 2015 ISSN 2091-273",
"title": ""
},
{
"docid": "adb6144e24291071f6c80e1190582f4e",
"text": "Molecular docking is an important method in computational drug discovery. In large-scale virtual screening, millions of small drug-like molecules (chemical compounds) are compared against a designated target protein (receptor). Depending on the utilized docking algorithm for screening, this can take several weeks on conventional HPC systems. However, for certain applications including large-scale screening tasks for newly emerging infectious diseases such high runtimes can be highly prohibitive. In this paper, we investigate how the massively parallel neo-heterogeneous architecture of Tianhe-2 Supercomputer consisting of thousands of nodes comprising CPUs and MIC coprocessors that can efficiently be used for virtual screening tasks. Our proposed approach is based on a coordinated parallel framework called mD3DOCKxb in which CPUs collaborate with MICs to achieve high hardware utilization. mD3DOCKxb comprises a novel efficient communication engine for dynamic task scheduling and load balancing between nodes in order to reduce communication and I/O latency. This results in a highly scalable implementation with parallel efficiency of over 84% (strong scaling) when executing on 8,000 Tianhe-2 nodes comprising 192,000 CPU cores and 1,368,000 MIC cores.",
"title": ""
}
] |
scidocsrr
|
6323ee41481aa633455b839b29dd1eea
|
A Binning Scheme for Fast Hard Drive Based Image Search
|
[
{
"docid": "7eec1e737523dc3b78de135fc71b058f",
"text": "Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences epsivnerally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This \"pyramid match\" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches",
"title": ""
},
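As an illustration of the pyramid-match idea described in the record above, the following sketch bins two unordered 1-D feature sets into histograms at several resolutions and accumulates the newly matched mass per level, weighting finer levels more. The 1-D features, bin counts and weighting scheme are simplifying assumptions rather than the authors' exact multi-dimensional construction.

```python
import numpy as np

def level_histogram(points, num_bins, lo, hi):
    counts, _ = np.histogram(points, bins=num_bins, range=(lo, hi))
    return counts

def pyramid_match(X, Y, num_levels=4, lo=0.0, hi=1.0):
    """Weighted sum of new histogram-intersection matches across resolutions.

    X, Y: 1-D arrays of scalar features in [lo, hi]; the set sizes may differ.
    Level 0 is the finest grid; matches first found at finer levels receive
    larger weights, mimicking the preference for tight correspondences.
    """
    score, prev_intersection = 0.0, 0.0
    for level in range(num_levels):
        num_bins = 2 ** (num_levels - level)            # halve resolution each level
        hx = level_histogram(X, num_bins, lo, hi)
        hy = level_histogram(Y, num_bins, lo, hi)
        intersection = float(np.minimum(hx, hy).sum())  # matches at this resolution
        score += (intersection - prev_intersection) / (2 ** level)
        prev_intersection = intersection
    return score

rng = np.random.default_rng(0)
A, B = rng.random(30), rng.random(45)   # unordered feature sets of different sizes
print(pyramid_match(A, B))
```

Because only histogram intersections are computed, the cost stays linear in the number of features, which is the property the abstract highlights.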
{
"docid": "3982c66e695fdefe36d8d143247add88",
"text": "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.",
"title": ""
}
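The following sketch illustrates hierarchical quantization in the spirit of the vocabulary-tree record above: descriptors are recursively clustered with k-means, and a query descriptor is quantized by greedily descending to the nearest centre at each level, so the path of choices acts as the visual-word id. The branching factor, depth and random descriptors are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

class VocabularyTree:
    """Hierarchical k-means quantizer with branch_factor children per node."""

    def __init__(self, branch_factor=3, depth=2):
        self.k = branch_factor
        self.depth = depth

    def fit(self, descriptors, level=0):
        # Recursively cluster descriptors; each node keeps its centres
        # and one child subtree per centre. Small or deep nodes become leaves.
        self.centres, self.children = None, []
        if level >= self.depth or len(descriptors) < self.k:
            return self
        km = KMeans(n_clusters=self.k, n_init=10, random_state=0).fit(descriptors)
        self.centres = km.cluster_centers_
        for c in range(self.k):
            child = VocabularyTree(self.k, self.depth)
            child.fit(descriptors[km.labels_ == c], level + 1)
            self.children.append(child)
        return self

    def quantize(self, descriptor, path=()):
        # Walk down the tree choosing the nearest centre at every level.
        if self.centres is None:
            return path
        nearest = int(np.argmin(np.linalg.norm(self.centres - descriptor, axis=1)))
        return self.children[nearest].quantize(descriptor, path + (nearest,))

rng = np.random.default_rng(1)
descs = rng.random((500, 8))            # stand-in for local region descriptors
tree = VocabularyTree().fit(descs)
print(tree.quantize(rng.random(8)))     # e.g. (2, 0): a leaf visual word
```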
] |
[
{
"docid": "f67e221a12e0d8ebb531a1e7c80ff2ff",
"text": "Fine-grained image classification is to recognize hundreds of subcategories belonging to the same basic-level category, such as 200 subcategories belonging to the bird, which is highly challenging due to large variance in the same subcategory and small variance among different subcategories. Existing methods generally first locate the objects or parts and then discriminate which subcategory the image belongs to. However, they mainly have two limitations: 1) relying on object or part annotations which are heavily labor consuming; and 2) ignoring the spatial relationships between the object and its parts as well as among these parts, both of which are significantly helpful for finding discriminative parts. Therefore, this paper proposes the object-part attention model (OPAM) for weakly supervised fine-grained image classification and the main novelties are: 1) object-part attention model integrates two level attentions: object-level attention localizes objects of images, and part-level attention selects discriminative parts of object. Both are jointly employed to learn multi-view and multi-scale features to enhance their mutual promotion; and 2) Object-part spatial constraint model combines two spatial constraints: object spatial constraint ensures selected parts highly representative and part spatial constraint eliminates redundancy and enhances discrimination of selected parts. Both are jointly employed to exploit the subtle and local differences for distinguishing the subcategories. Importantly, neither object nor part annotations are used in our proposed approach, which avoids the heavy labor consumption of labeling. Compared with more than ten state-of-the-art methods on four widely-used datasets, our OPAM approach achieves the best performance.",
"title": ""
},
{
"docid": "e051c1dafe2a2f45c48a79c320894795",
"text": "In this paper we present a graph-based model that, utilizing relations between groups of System-calls, detects whether an unknown software sample is malicious or benign, and classifies a malicious software to one of a set of known malware families. More precisely, we utilize the System-call Dependency Graphs (or, for short, ScD-graphs), obtained by traces captured through dynamic taint analysis. We design our model to be resistant against strong mutations applying our detection and classification techniques on a weighted directed graph, namely Group Relation Graph, or Gr-graph for short, resulting from ScD-graph after grouping disjoint subsets of its vertices. For the detection process, we propose the $$\\Delta $$ Δ -similarity metric, and for the process of classification, we propose the SaMe-similarity and NP-similarity metrics consisting the SaMe-NP similarity. Finally, we evaluate our model for malware detection and classification showing its potentials against malicious software measuring its detection rates and classification accuracy.",
"title": ""
},
{
"docid": "fb70de7ed3e42c37b130686bfa3aee47",
"text": "Data from vehicles instrumented with GPS or other localization technologies are increasingly becoming widely available due to the investments in Connected and Automated Vehicles (CAVs) and the prevalence of personal mobile devices such as smartphones. Tracking or trajectory data from these probe vehicles are already being used in practice for travel time or speed estimation and for monitoring network conditions. However, there has been limited work on extracting other critical traffic flow variables, in particular density and flow, from probe data. This paper presents a microscopic approach (akin to car-following) for inferring the number of unobserved vehicles in between a set of probe vehicles in the traffic stream. In particular, we develop algorithms to extract and exploit the somewhat regular patterns in the trajectories when the probe vehicles travel through stop-and-go waves in congested traffic. Using certain critical points of trajectories as the input, the number of unobserved vehicles between consecutive probes are then estimated through a Naïve Bayes model. The parameters needed for the Naïve Bayes include means and standard deviations for the probability density functions (pdfs) for the distance headways between vehicles. These parameters are estimated through supervised as well as unsupervised learning methods. The proposed ideas are tested based on the trajectory data collected from US 101 and I-80 in California for the FHWA's NGSIM (next generation simulation) project. Under the dense traffic conditions analyzed, the results show that the number of unobserved vehicles between two probes can be predicted with an accuracy of ±1 vehicle almost always.",
"title": ""
},
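A toy sketch of the counting idea in the probe-vehicle record above: assuming Gaussian distance headways, a Naive-Bayes-style posterior over the number of unobserved vehicles between two probes is computed from their spacing. The headway parameters, flat prior and single-feature likelihood are made-up simplifications of the paper's model.

```python
import math

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def posterior_unobserved(gap_m, headway_mean=8.0, headway_std=2.5, max_n=10):
    """P(n unobserved vehicles | spacing between two probes).

    With n unobserved vehicles there are n + 1 headways in the gap, so the gap
    is modelled as Normal((n+1)*mean, sqrt(n+1)*std); a flat prior over n is used.
    """
    scores = []
    for n in range(max_n + 1):
        m = (n + 1) * headway_mean
        s = math.sqrt(n + 1) * headway_std
        scores.append(gaussian_pdf(gap_m, m, s))     # likelihood * flat prior
    total = sum(scores)
    return [s / total for s in scores]

post = posterior_unobserved(gap_m=42.0)
best = max(range(len(post)), key=lambda n: post[n])
print(best, round(post[best], 3))     # most likely count and its probability
```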
{
"docid": "9fc6244b3d0301a8486d44d58cf95537",
"text": "The aim of this paper is to explore some, ways of linking ethnographic studies of work in context with the design of CSCW systems. It uses examples from an interdisciplinary collaborative project on air traffic control. Ethnographic methods are introduced, and applied to identifying the social organization of this cooperative work, and the use of instruments within it. On this basis some metaphors for the electronic representation of current manual practices are presented, and their possibilities and limitations are discussed.",
"title": ""
},
{
"docid": "b5967a8dc6a8349b2f5c1d3070369d3c",
"text": "Hereditary xerocytosis is thought to be a rare genetic condition characterized by red blood cell (RBC) dehydration with mild hemolysis. RBC dehydration is linked to reduced Plasmodium infection in vitro; however, the role of RBC dehydration in protection against malaria in vivo is unknown. Most cases of hereditary xerocytosis are associated with gain-of-function mutations in PIEZO1, a mechanically activated ion channel. We engineered a mouse model of hereditary xerocytosis and show that Plasmodium infection fails to cause experimental cerebral malaria in these mice due to the action of Piezo1 in RBCs and in T cells. Remarkably, we identified a novel human gain-of-function PIEZO1 allele, E756del, present in a third of the African population. RBCs from individuals carrying this allele are dehydrated and display reduced Plasmodium infection in vitro. The existence of a gain-of-function PIEZO1 at such high frequencies is surprising and suggests an association with malaria resistance.",
"title": ""
},
{
"docid": "f86b052520e3950a2b580323252dbfde",
"text": "In this paper, novel radial basis function-neural network (RBF-NN) models are presented for the efficient filling of the coupling matrix of the method of moments (MoM). Two RBF-NNs are trained to calculate the majority of elements in the coupling matrix. The rest of elements are calculated using the conventional MoM, hence the technique is referred to as neural network-method of moments (NN-MoM). The proposed NN-MoM is applied to the analysis of a number of microstrip patch antenna arrays. The results show that NN-MoM is both accurate and fast. The proposed technique is general and it is convenient to integrate with MoM planar solvers.",
"title": ""
},
{
"docid": "88ff3300dafab6b87d770549a1dc4f0e",
"text": "Novelty search is a recent algorithm geared toward exploring search spaces without regard to objectives. When the presence of constraints divides a search space into feasible space and infeasible space, interesting implications arise regarding how novelty search explores such spaces. This paper elaborates on the problem of constrained novelty search and proposes two novelty search algorithms which search within both the feasible and the infeasible space. Inspired by the FI-2pop genetic algorithm, both algorithms maintain and evolve two separate populations, one with feasible and one with infeasible individuals, while each population can use its own selection method. The proposed algorithms are applied to the problem of generating diverse but playable game levels, which is representative of the larger problem of procedural game content generation. Results show that the two-population constrained novelty search methods can create, under certain conditions, larger and more diverse sets of feasible game levels than current methods of novelty search, whether constrained or unconstrained. However, the best algorithm is contingent on the particularities of the search space and the genetic operators used. Additionally, the proposed enhancement of offspring boosting is shown to enhance performance in all cases of two-population novelty search.",
"title": ""
},
{
"docid": "15ce175cc7aa263ded19c0ef344d9a61",
"text": "This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-ofthe-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.",
"title": ""
},
{
"docid": "2ed183563bd5cdaafa96b03836883730",
"text": "This is an introduction to the Classic Paper on MOSFET scaling by R. Dennardet al., “Design of Ion-Implanted MOSFET’s with Very Small Physical Dimensions,” published in the IEEE Journal of Solid-State Circuitsin October 1974. The history of scaling and its application to very large scale integration (VLSI) MOSFET technology is traced from 1970 to 1998. The role of scaling in the profound improvements in power delay product over the last three decades is analyzed in basic terms.",
"title": ""
},
{
"docid": "ec2d9c12a906eb999e7a178d0f672073",
"text": "Passive-dynamic walkers are simple mechanical devices, composed of solid parts connected by joints, that walk stably down a slope. They have no motors or controllers, yet can have remarkably humanlike motions. This suggests that these machines are useful models of human locomotion; however, they cannot walk on level ground. Here we present three robots based on passive-dynamics, with small active power sources substituted for gravity, which can walk on level ground. These robots use less control and less energy than other powered robots, yet walk more naturally, further suggesting the importance of passive-dynamics in human locomotion.",
"title": ""
},
{
"docid": "0ea239ac71e65397d0713fe8c340f67c",
"text": "Mutations in leucine-rich repeat kinase 2 (LRRK2) are a common cause of familial and sporadic Parkinson's disease (PD). Elevated LRRK2 kinase activity and neurodegeneration are linked, but the phosphosubstrate that connects LRRK2 kinase activity to neurodegeneration is not known. Here, we show that ribosomal protein s15 is a key pathogenic LRRK2 substrate in Drosophila and human neuron PD models. Phosphodeficient s15 carrying a threonine 136 to alanine substitution rescues dopamine neuron degeneration and age-related locomotor deficits in G2019S LRRK2 transgenic Drosophila and substantially reduces G2019S LRRK2-mediated neurite loss and cell death in human dopamine and cortical neurons. Remarkably, pathogenic LRRK2 stimulates both cap-dependent and cap-independent mRNA translation and induces a bulk increase in protein synthesis in Drosophila, which can be prevented by phosphodeficient T136A s15. These results reveal a novel mechanism of PD pathogenesis linked to elevated LRRK2 kinase activity and aberrant protein synthesis in vivo.",
"title": ""
},
{
"docid": "4ae0bb75493e5d430037ba03fcff4054",
"text": "David Moher is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, and the Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada. Alessandro Liberati is at the Università di Modena e Reggio Emilia, Modena, and the Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy. Jennifer Tetzlaff is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario. Douglas G Altman is at the Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom. Membership of the PRISMA Group is provided in the Acknowledgements.",
"title": ""
},
{
"docid": "f1f72a6d5d2ab8862b514983ac63480b",
"text": "Grids are commonly used as histograms to process spatial data in order to detect frequent patterns, predict destinations, or to infer popular places. However, they have not been previously used for GPS trajectory similarity searches or retrieval in general. Instead, slower and more complicated algorithms based on individual point-pair comparison have been used. We demonstrate how a grid representation can be used to compute four different route measures: novelty, noteworthiness, similarity, and inclusion. The measures may be used in several applications such as identifying taxi fraud, automatically updating GPS navigation software, optimizing traffic, and identifying commuting patterns. We compare our proposed route similarity measure, C-SIM, to eight popular alternatives including Edit Distance on Real sequence (EDR) and Frechet distance. The proposed measure is simple to implement and we give a fast, linear time algorithm for the task. It works well under noise, changes in sampling rate, and point shifting. We demonstrate that by using the grid, a route similarity ranking can be computed in real-time on the Mopsi20141 route dataset, which consists of over 6,000 routes. This ranking is an extension of the most similar route search and contains an ordered list of all similar routes from the database. The real-time search is due to indexing the cell database and comes at the cost of spending 80% more memory space for the index. The methods are implemented inside the Mopsi2 route module.",
"title": ""
},
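A minimal sketch of the grid representation used in the route-similarity record above: each GPS trace is reduced to the set of grid cells it visits, and similarity is the overlap of the two cell sets. The cell size and the Jaccard-style overlap are illustrative stand-ins for the paper's C-SIM measure rather than its exact definition.

```python
def route_cells(points, cell_size=0.001):
    """Map a GPS trace [(lat, lon), ...] to the set of grid cells it visits."""
    return {(int(lat // cell_size), int(lon // cell_size)) for lat, lon in points}

def grid_similarity(route_a, route_b, cell_size=0.001):
    """Cell-set overlap in [0, 1]; 1 means the routes cover identical cells."""
    a, b = route_cells(route_a, cell_size), route_cells(route_b, cell_size)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two synthetic traces, the second slightly shifted sideways.
r1 = [(62.600 + 0.0001 * i, 29.760 + 0.0002 * i) for i in range(50)]
r2 = [(62.600 + 0.0001 * i, 29.7601 + 0.0002 * i) for i in range(50)]
print(round(grid_similarity(r1, r2), 2))
```

Reducing routes to cell sets is what makes the measure cheap enough for real-time ranking over thousands of routes, as the abstract reports.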
{
"docid": "ab589fb1d97849e95da05d7e9b1d0f4f",
"text": "We introduce a new speaker independent method for reducing wind noise in single-channel recordings of noisy speech. The method is based on non-negative sparse coding and relies on a wind noise dictionary which is estimated from an isolated noise recording. We estimate the parameters of the model and discuss their sensitivity. We then compare the algorithm with the classical spectral subtraction method and the Qualcomm-ICSI-OGI noise reduction method. We optimize the sound quality in terms of signal-to-noise ratio and provide results on a noisy speech recognition task.",
"title": ""
},
{
"docid": "18c8fcba57c295568942fa40b605c27e",
"text": "The Internet of Things (IoT), an emerging global network of uniquely identifiable embedded computing devices within the existing Internet infrastructure, is transforming how we live and work by increasing the connectedness of people and things on a scale that was once unimaginable. In addition to increased communication efficiency between connected objects, the IoT also brings new security and privacy challenges. Comprehensive measures that enable IoT device authentication and secure access control need to be established. Existing hardware, software, and network protection methods, however, are designed against fraction of real security issues and lack the capability to trace the provenance and history information of IoT devices. To mitigate this shortcoming, we propose an RFID-enabled solution that aims at protecting endpoint devices in IoT supply chain. We take advantage of the connection between RFID tag and control chip in an IoT device to enable data transfer from tag memory to centralized database for authentication once deployed. Finally, we evaluate the security of our proposed scheme against various attacks.",
"title": ""
},
{
"docid": "990fb61d1135b05f88ae02eb71a6983f",
"text": "Previous efforts in recommendation of candidates for talent search followed the general pattern of receiving an initial search criteria and generating a set of candidates utilizing a pre-trained model. Traditionally, the generated recommendations are final, that is, the list of potential candidates is not modified unless the user explicitly changes his/her search criteria. In this paper, we are proposing a candidate recommendation model which takes into account the immediate feedback of the user, and updates the candidate recommendations at each step. This setting also allows for very uninformative initial search queries, since we pinpoint the user's intent due to the feedback during the search session. To achieve our goal, we employ an intent clustering method based on topic modeling which separates the candidate space into meaningful, possibly overlapping, subsets (which we call intent clusters) for each position. On top of the candidate segments, we apply a multi-armed bandit approach to choose which intent cluster is more appropriate for the current session. We also present an online learning scheme which updates the intent clusters within the session, due to user feedback, to achieve further personalization. Our offline experiments as well as the results from the online deployment of our solution demonstrate the benefits of our proposed methodology.",
"title": ""
},
{
"docid": "1d12470ab31735721a1f50ac48ac65bd",
"text": "In this work, we investigate the role of relational bonds in keeping students engaged in online courses. Specifically, we quantify the manner in which students who demonstrate similar behavior patterns influence each other’s commitment to the course through their interaction with them either explicitly or implicitly. To this end, we design five alternative operationalizations of relationship bonds, which together allow us to infer a scaled measure of relationship between pairs of students. Using this, we construct three variables, namely number of significant bonds, number of significant bonds with people who have dropped out in the previous week, and number of such bonds with people who have dropped in the current week. Using a survival analysis, we are able to measure the prediction strength of these variables with respect to dropout at each time point. Results indicate that higher numbers of significant bonds predicts lower rates of dropout; while loss of significant bonds is associated with higher rates of dropout.",
"title": ""
},
{
"docid": "9ceb26a83e77ac304272625a148c504e",
"text": "This article presents the architecture of Junior, a robotic vehicle capable of navigating urban environments autonomously. In doing so, the ve icle is able to select its own routes, perceive and interact with other traffic, and execute various urban driving skills including lane changes, U-turns, parking, a nd merging into moving traffic. The vehicle successfully finished and won second pla ce in the DARPA Urban Challenge, a robot competition organized by the U.S. Gove rnm nt.",
"title": ""
},
{
"docid": "b4b9952da82739fc79ecf949ddcd8e05",
"text": "Light field depth estimation is an essential part of many light field applications. Numerous algorithms have been developed using various light field characteristics. However, conventional methods fail when handling noisy scene with occlusion. To remedy this problem, we present a light field depth estimation method which is more robust to occlusion and less sensitive to noise. Novel data costs using angular entropy metric and adaptive defocus response are introduced. Integration of both data costs improves the occlusion and noise invariant capability significantly. Cost volume filtering and graph cut optimization are utilized to improve the accuracy of the depth map. Experimental results confirm that the proposed method is robust and achieves high quality depth maps in various scenes. The proposed method outperforms the state-of-the-art light field depth estimation methods in qualitative and quantitative evaluation.",
"title": ""
},
{
"docid": "afe36d039098b94a77ea58fa56bd895d",
"text": "We present a framework to automatically detect and remove shadows in real world scenes from a single image. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The features are learned at the super-pixel level and along the dominant boundaries in the image. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow masks. Using the detected shadow masks, we propose a Bayesian formulation to accurately extract shadow matte and subsequently remove shadows. The Bayesian formulation is based on a novel model which accurately models the shadow generation process in the umbra and penumbra regions. The model parameters are efficiently estimated using an iterative optimization procedure. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.",
"title": ""
}
] |
scidocsrr
|
4a20916cf1ff2f9e74067374f231ac8f
|
A hybrid support vector machines and logistic regression approach for forecasting intermittent demand of spare parts
|
[
{
"docid": "386cd963cf70c198b245a3251c732180",
"text": "Support vector machines (SVMs) are promising methods for the prediction of -nancial timeseries because they use a risk function consisting of the empirical error and a regularized term which is derived from the structural risk minimization principle. This study applies SVM to predicting the stock price index. In addition, this study examines the feasibility of applying SVM in -nancial forecasting by comparing it with back-propagation neural networks and case-based reasoning. The experimental results show that SVM provides a promising alternative to stock market prediction. c © 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "700d3e2cb64624df33ef411215d073ab",
"text": "A novel type of learning machine called support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there are comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.",
"title": ""
}
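A brief scikit-learn sketch in the spirit of the two SVM-forecasting records above: lagged values of a synthetic price series serve as features for support vector regression predicting the next value. The lag count, kernel choice, hyperparameters and synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic "price" series: a slow drift plus noise, standing in for real quotes.
prices = np.cumsum(rng.normal(0.1, 1.0, 500)) + 100.0

def lagged_dataset(series, n_lags=5):
    # Each row holds the previous n_lags values; the target is the next value.
    X = np.array([series[i - n_lags:i] for i in range(n_lags, len(series))])
    y = series[n_lags:]
    return X, y

X, y = lagged_dataset(prices)
split = int(0.8 * len(X))                       # simple chronological split
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"test MAE: {mae:.3f}")
```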
] |
[
{
"docid": "814923f39e568d9e56da015c7bb311bf",
"text": "Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional, input data samples, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application. However, we currently lack quantitative methods for model assessment. Because of this, while many GAN variants are being proposed, we have relatively little understanding of their relative abilities. In this paper, we evaluate the performance of various types of GANs using divergence and distance functions typically used only for training. We observe consistency across the various proposed metrics and, interestingly, the test-time metrics do not favour networks that use the same training-time criterion. We also compare the proposed metrics to human perceptual scores.",
"title": ""
},
{
"docid": "2b8aa68835bc61f3d0b5da39441185c9",
"text": "This position paper explores the threat to individual privacy due to the widespread use of consumer drones. Present day consumer drones are equipped with sensors such as cameras and microphones, and their types and numbers can be well expected to increase in future. Drone operators have absolute control on where the drones fly and what the on-board sensors record with no options for bystanders to protect their privacy. This position paper proposes a policy language that allows homeowners, businesses, governments, and privacy-conscious individuals to specify location access-control for drones, and discusses how these policy-based controls might be realized in practice. This position paper also explores the potential future problem of managing consumer drone traffic that is likely to emerge with increasing use of consumer drones for various tasks. It proposes a privacy preserving traffic management protocol for directing drones towards their respective destinations without requiring drones to reveal their destinations.",
"title": ""
},
{
"docid": "d80580490ac7d968ff08c2a9ee159028",
"text": "Statistical relational AI (StarAI) aims at reasoning and learning in noisy domains described in terms of objects and relationships by combining probability with first-order logic. With huge advances in deep learning in the current years, combining deep networks with first-order logic has been the focus of several recent studies. Many of the existing attempts, however, only focus on relations and ignore object properties. The attempts that do consider object properties are limited in terms of modelling power or scalability. In this paper, we develop relational neural networks (RelNNs) by adding hidden layers to relational logistic regression (the relational counterpart of logistic regression). We learn latent properties for objects both directly and through general rules. Back-propagation is used for training these models. A modular, layer-wise architecture facilitates utilizing the techniques developed within deep learning community to our architecture. Initial experiments on eight tasks over three real-world datasets show that RelNNs are promising models for relational learning.",
"title": ""
},
{
"docid": "ca6eb17d02fd8055ea37ca66306f8bb5",
"text": "Advances in satellite imagery presents unprecedented opportunities for understanding natural and social phenomena at global and regional scales. Although the field of satellite remote sensing has evaluated imperative questions to human and environmental sustainability, scaling those techniques to very high spatial resolutions at regional scales remains a challenge. Satellite imagery is now more accessible with greater spatial, spectral and temporal resolution creating a data bottleneck in identifying the content of images. Because satellite images are unlabeled, unsupervised methods allow us to organize images into coherent groups or clusters. However, the performance of unsupervised methods, like all other machine learning methods, depends on features. Recent studies using features from pre-trained networks have shown promise for learning in new datasets. This suggests that features from pre-trained networks can be used for learning in temporally and spatially dynamic data sources such as satellite imagery. It is not clear, however, which features from which layer and network architecture should be used for learning new tasks. In this paper, we present an approach to evaluate the transferability of features from pre-trained Deep Convolutional Neural Networks for satellite imagery. We explore and evaluate different features and feature combinations extracted from various deep network architectures, and systematically evaluate over 2,000 network-layer combinations. In addition, we test the transferability of our engineered features and learned features from an unlabeled dataset to a different labeled dataset. Our feature engineering and learning are done on the unlabeled Draper Satellite Chronology dataset, and we test on the labeled UC Merced Land dataset to achieve near state-of-the-art classification results. These results suggest that even without any or minimal training, these networks can generalize well to other datasets. This method could be useful in the task of clustering unlabeled images and other unsupervised machine learning tasks.",
"title": ""
},
{
"docid": "22ab14bba18c990d2b096cb5aeaa6314",
"text": "Airport traffic consists of aircraft performing landing, takeoff and taxi procedures. It is controlled by air traffic controller (ATC). To safely perform this task he/she uses traffic surveillance equipment and voice communication systems to issue control clearances. One of the most important indicators of this process efficiency is practical airport capacity, which refers to the number of aircraft handled and delays which occurred at the same time. This paper presents the concept of airport traffic modelling using coloured, timed, stochastic Petri nets. By the example of the airport with one runway and simultaneous takeoff and landing operations, the applicability of such models in analysis of air traffic processes is shown. Simulation experiments, in which CPN Tools package was used, showed the impact of the initial formation of landing aircraft stream on airside capacity of the airport. They also showed the possibility of its increase by changes in the organisation of takeoff and landing processes.",
"title": ""
},
{
"docid": "e7bb89000329245bccdecbc80549109c",
"text": "This paper presents a tutorial overview of the use of coupling between nonadjacent resonators to produce transmission zeros at real frequencies in microwave filters. Multipath coupling diagrams are constructed and the relative phase shifts of multiple paths are observed to produce the known responses of the cascaded triplet and quadruplet sections. The same technique is also used to explore less common nested cross-coupling structures and to predict their behavior. A discussion of the effects of nonzero electrical length coupling elements is presented. Finally, a brief categorization of the various synthesis and implementation techniques available for these types of filters is given.",
"title": ""
},
{
"docid": "dcfe8e834a7726aa49ea37368ffc6ff6",
"text": "Object recognition and categorization are computationally difficult tasks that are performed effortlessly by humans. Attempts have been made to emulate the computations in different parts of the primate cortex to gain a better understanding of the cortex and to design brain–machine interfaces that speak the same language as the brain. The HMAX model proposed by Riesenhuber and Poggio and extended by Serre <etal/> attempts to truly model the visual cortex. In this paper, we provide a spike-based implementation of the HMAX model, demonstrating its ability to perform biologically-plausible MAX computations as well as classify basic shapes. The spike-based model consists of 2514 neurons and 17<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\thinspace$</tex> </formula>305 synapses (S1 Layer: 576 neurons and 7488 synapses, C1 Layer: 720 neurons and 2880 synapses, S2 Layer: 576 neurons and 1152 synapses, C2 Layer: 640 neurons and 5760 synapses, and Classifier: 2 neurons and 25 synapses). Without the limits of the retina model, it will take the system 2 min to recognize rectangles and triangles in 24<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times\\,$</tex> </formula>24 pixel images. This can be reduced to 4.8 s by rearranging the lookup table so that neurons which have similar responses to the same input(s) can be placed on the same row and affected in parallel.",
"title": ""
},
{
"docid": "60556a58af0196cc0032d7237636ec52",
"text": "This paper investigates what students understand about algorithm efficiency before receiving any formal instruction on the topic. We gave students a challenging search problem and two solutions, then asked them to identify the more efficient solution and to justify their choice. Many students did not use the standard worst-case analysis of algorithms; rather they chose other metrics, including average-case, better for more cases, better in all cases, one algorithm being more correct, and better for real-world scenarios. Students were much more likely to choose the correct algorithm when they were asked to trace the algorithms on specific examples; this was true even if they traced the algorithms incorrectly.",
"title": ""
},
{
"docid": "6fc290610e99d66248c6d9e8c4fa4f02",
"text": "Ali, M. A. 2014. Understanding Cancer Mutations by Genome Editing. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Medicine 1054. 37 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9106-2. Mutational analyses of cancer genomes have identified novel candidate cancer genes with hitherto unknown function in cancer. To enable phenotyping of mutations in such genes, we have developed a scalable technology for gene knock-in and knock-out in human somatic cells based on recombination-mediated construct generation and a computational tool to design gene targeting constructs. Using this technology, we have generated somatic cell knock-outs of the putative cancer genes ZBED6 and DIP2C in human colorectal cancer cells. In ZBED6 cells complete loss of functional ZBED6 was validated and loss of ZBED6 induced the expression of IGF2. Whole transcriptome and ChIP-seq analyses revealed relative enrichment of ZBED6 binding sites at upregulated genes as compared to downregulated genes. The functional annotation of differentially expressed genes revealed enrichment of genes related to cell cycle and cell proliferation and the transcriptional modulator ZBED6 affected the cell growth and cell cycle of human colorectal cancer cells. In DIP2Ccells, transcriptome sequencing revealed 780 differentially expressed genes as compared to their parental cells including the tumour suppressor gene CDKN2A. The DIP2C regulated genes belonged to several cancer related processes such as angiogenesis, cell structure and motility. The DIP2Ccells were enlarged and grew slower than their parental cells. To be able to directly compare the phenotypes of mutant KRAS and BRAF in colorectal cancers, we have introduced a KRAS allele in RKO BRAF cells. The expression of the mutant KRAS allele was confirmed and anchorage independent growth was restored in KRAS cells. The differentially expressed genes both in BRAF and KRAS mutant cells included ERBB, TGFB and histone modification pathways. Together, the isogenic model systems presented here can provide insights to known and novel cancer pathways and can be used for drug discovery.",
"title": ""
},
{
"docid": "26787002ed12cc73a3920f2851449c5e",
"text": "This article brings together three current themes in organizational behavior: (1) a renewed interest in assessing person-situation interactional constructs, (2) the quantitative assessment of organizational culture, and (3) the application of \"Q-sort,\" or template-matching, approaches to assessing person-situation interactions. Using longitudinal data from accountants and M.B.A. students and cross-sectional data from employees of government agencies and public accounting firms, we developed and validated an instrument for assessing personorganization fit, the Organizational Culture Profile (OCP). Results suggest that the dimensionality of individual preferences for organizational cultures and the existence of these cultures are interpretable. Further, person-organization fit predicts job satisfaction and organizational commitment a year after fit was measured and actual turnover after two years. This evidence attests to the importance of understanding the fit between individuals' preferences and organizational cultures.",
"title": ""
},
{
"docid": "5bd2a042a1309792da03577d3eaf24dc",
"text": "Movement primitives are a well established approach for encoding and executing movements. While the primitives themselves have been extensively researched, the concept of movement primitive libraries has not received similar attention. Libraries of movement primitives represent the skill set of an agent. Primitives can be queried and sequenced in order to solve specific tasks. The goal of this work is to segment unlabeled demonstrations into a representative set of primitives. Our proposed method differs from current approaches by taking advantage of the often neglected, mutual dependencies between the segments contained in the demonstrations and the primitives to be encoded. By exploiting this mutual dependency, we show that we can improve both the segmentation and the movement primitive library. Based on probabilistic inference our novel approach segments the demonstrations while learning a probabilistic representation of movement primitives. We demonstrate our method on two real robot applications. First, the robot segments sequences of different letters into a library, explaining the observed trajectories. Second, the robot segments demonstrations of a chair assembly task into a movement primitive library. The library is subsequently used to assemble the chair in an order not present in the demonstrations.",
"title": ""
},
{
"docid": "a9c00556e3531ba81cc009ae3f5a1816",
"text": "A systematic, tiered approach to assess the safety of engineered nanomaterials (ENMs) in foods is presented. The ENM is first compared to its non-nano form counterpart to determine if ENM-specific assessment is required. Of highest concern from a toxicological perspective are ENMs which have potential for systemic translocation, are insoluble or only partially soluble over time or are particulate and bio-persistent. Where ENM-specific assessment is triggered, Tier 1 screening considers the potential for translocation across biological barriers, cytotoxicity, generation of reactive oxygen species, inflammatory response, genotoxicity and general toxicity. In silico and in vitro studies, together with a sub-acute repeat-dose rodent study, could be considered for this phase. Tier 2 hazard characterisation is based on a sentinel 90-day rodent study with an extended range of endpoints, additional parameters being investigated case-by-case. Physicochemical characterisation should be performed in a range of food and biological matrices. A default assumption of 100% bioavailability of the ENM provides a 'worst case' exposure scenario, which could be refined as additional data become available. The safety testing strategy is considered applicable to variations in ENM size within the nanoscale and to new generations of ENM.",
"title": ""
},
{
"docid": "0965f1390233e71da72fbc8f37394add",
"text": "Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.",
"title": ""
},
{
"docid": "e6088779901bd4bfaf37a3a1784c3854",
"text": "There has been recently a great progress in the field of automatically generated knowledge bases and corresponding disambiguation systems that are capable of mapping text mentions onto canonical entities. Efforts like the before mentioned have enabled researchers and analysts from various disciplines to semantically “understand” contents. However, most of the approaches have been specifically designed for the English language and in particular support for Arabic is still in its infancy. Since the amount of Arabic Web contents (e.g. in social media) has been increasing dramatically over the last years, we see a great potential for endeavors that support an entity-level analytics of these data. To this end, we have developed a framework called AIDArabic that extends the existing AIDA system by additional components that allow the disambiguation of Arabic texts based on an automatically generated knowledge base distilled from Wikipedia. Even further, we overcome the still existing sparsity of the Arabic Wikipedia by exploiting the interwiki links between Arabic and English contents in Wikipedia, thus, enriching the entity catalog as well as disambiguation context.",
"title": ""
},
{
"docid": "80ae11d4c626c564023ab70b64bde846",
"text": "This paper presents the results of the study carried out for the determination of the residential, commercial and industrial consumers daily load curves based on field measurements performed by the Utilities of Electric Energy of São Paulo State, Brazil. A methodology for the aggregation of these loads to determine the expected loading in equipment or in a preset part of the distribution network by using the representative daily curves of each consumer’s activity and the monthly energy consumption of the connected consumers is also presented.",
"title": ""
},
{
"docid": "fcbb5b1adf14b443ef0d4a6f939140fe",
"text": "In this paper we make the case for IoT edge offloading, which strives to exploit the resources on edge computing devices by offloading fine-grained computation tasks from the cloud closer to the users and data generators (i.e., IoT devices). The key motive is to enhance performance, security and privacy for IoT services. Our proposal bridges the gap between cloud computing and IoT by applying a divide and conquer approach over the multi-level (cloud, edge and IoT) information pipeline. To validate the design of IoT edge offloading, we developed a unikernel-based prototype and evaluated the system under various hardware and network conditions. Our experimentation has shown promising results and revealed the limitation of existing IoT hardware and virtualization platforms, shedding light on future research of edge computing and IoT.",
"title": ""
},
{
"docid": "cec6e899c23dd65881f84cca81205eb0",
"text": "A fuzzy graph (f-graph) is a pair G : ( σ, μ) where σ is a fuzzy subset of a set S and μ is a fuzzy relation on σ. A fuzzy graph H : ( τ, υ) is called a partial fuzzy subgraph of G : (σ, μ) if τ (u) ≤ σ(u) for every u and υ (u, v) ≤ μ(u, v) for every u and v . In particular we call a partial fuzzy subgraph H : ( τ, υ) a fuzzy subgraph of G : ( σ, μ ) if τ (u) = σ(u) for every u in τ * and υ (u, v) = μ(u, v) for every arc (u, v) in υ*. A connected f-graph G : ( σ, μ) is a fuzzy tree(f-tree) if it has a fuzzy spannin g subgraph F : (σ, υ), which is a tree, where for all arcs (x, y) not i n F there exists a path from x to y in F whose strength is more than μ(x, y). A path P of length n is a sequence of disti nct nodes u0, u1, ..., un such that μ(ui−1, ui) > 0, i = 1, 2, ..., n and the degree of membershi p of a weakest arc is defined as its strength. If u 0 = un and n≥ 3, then P is called a cycle and a cycle P is called a fuzzy cycle(f-cycle) if it cont ains more than one weakest arc . The strength of connectedness between two nodes x and y is efined as the maximum of the strengths of all paths between x and y and is denot ed by CONNG(x, y). An x − y path P is called a strongest x − y path if its strength equal s CONNG(x, y). An f-graph G : ( σ, μ) is connected if for every x,y in σ ,CONNG(x, y) > 0. In this paper, we offer a survey of selected recent results on fuzzy graphs.",
"title": ""
},
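The connectedness notion defined in the fuzzy-graph record above lends itself to a short worked example: the strength of a path is its weakest arc, and CONNG(x, y) is the maximum such strength over all x-y paths, computed here with a max-min variant of Floyd-Warshall. The membership values are arbitrary example numbers.

```python
# Max-min path strength (CONN_G) on a small fuzzy graph.
nodes = ["a", "b", "c", "d"]
mu = {  # arc membership values; unspecified pairs have membership 0
    ("a", "b"): 0.8, ("b", "c"): 0.5, ("c", "d"): 0.9, ("a", "d"): 0.3,
}

def arc(u, v):
    # Symmetric membership lookup.
    return max(mu.get((u, v), 0.0), mu.get((v, u), 0.0))

# conn[u][v] starts at the direct arc strength and is improved by allowing
# intermediate nodes: path strength = min over its arcs, CONN = max over paths.
conn = {u: {v: arc(u, v) for v in nodes} for u in nodes}
for w in nodes:
    for u in nodes:
        for v in nodes:
            conn[u][v] = max(conn[u][v], min(conn[u][w], conn[w][v]))

print(conn["a"]["d"])   # 0.5 via a-b-c-d, beating the direct arc of 0.3
```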
{
"docid": "c84a0f630b4fb2e547451d904e1c63a5",
"text": "Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on “informative” examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the persample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.",
"title": ""
},
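A small numpy sketch of the importance-sampling idea in the preceding record: examples are sampled with probability proportional to a per-sample score (here the current squared error, a cheap stand-in for the gradient-norm upper bound the paper derives), and gradients are re-weighted by 1/(N*p_i) to keep the estimate unbiased. The linear model, synthetic data and scoring proxy are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 1000, 10
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=N)

w = np.zeros(d)
lr, batch = 0.05, 32
for step in range(300):
    # Per-sample score: current squared error, used as a cheap proxy for the
    # gradient-norm bound that biases sampling toward "informative" examples.
    residual = X @ w - y
    scores = residual ** 2 + 1e-8
    p = scores / scores.sum()

    idx = rng.choice(N, size=batch, p=p)
    # Importance weights 1 / (N * p_i) keep the gradient estimate unbiased.
    iw = 1.0 / (N * p[idx])
    grad = (iw[:, None] * (X[idx] * residual[idx, None])).mean(axis=0) * 2.0
    w -= lr * grad

print("parameter error:", np.linalg.norm(w - w_true))
```

The paper additionally estimates the variance reduction to decide when importance sampling is worth switching on; that control logic is omitted here.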
{
"docid": "473968c14db4b189af126936fd5486ca",
"text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.",
"title": ""
},
{
"docid": "a458f16b84f40dc0906658a93d4b2efa",
"text": "We investigated the usefulness of Sonazoid contrast-enhanced ultrasonography (Sonazoid-CEUS) in the diagnosis of hepatocellular carcinoma (HCC). The examination was performed by comparing the images during the Kupffer phase of Sonazoid-CEUS with superparamagnetic iron oxide magnetic resonance (SPIO-MRI). The subjects were 48 HCC nodules which were histologically diagnosed (well-differentiated HCC, n = 13; moderately differentiated HCC, n = 30; poorly differentiated HCC, n = 5). We performed Sonazoid-CEUS and SPIO-MRI on all subjects. In the Kupffer phase of Sonazoid-CEUS, the differences in the contrast agent uptake between the tumorous and non-tumorous areas were quantified as the Kupffer phase ratio and compared. In the SPIO-MRI, it was quantified as the SPIO-intensity index. We then compared these results with the histological differentiation of HCCs. The Kupffer phase ratio decreased as the HCCs became less differentiated (P < 0.0001; Kruskal–Wallis test). The SPIO-intensity index also decreased as HCCs became less differentiated (P < 0.0001). A positive correlation was found between the Kupffer phase ratio and the SPIO-MRI index (r = 0.839). In the Kupffer phase of Sonazoid-CEUS, all of the moderately and poorly differentiated HCCs appeared hypoechoic and were detected as a perfusion defect, whereas the majority (9 of 13 cases, 69.2%) of the well-differentiated HCCs had an isoechoic pattern. The Kupffer phase images of Sonazoid-CEUS and SPIO-MRI matched perfectly (100%) in all of the moderately and poorly differentiated HCCs. Sonazoid-CEUS is useful for estimating histological grading of HCCs. It is a modality that could potentially replace SPIO-MRI.",
"title": ""
}
] |
scidocsrr
|
ae036b2fdd01807e326000d60af3fb17
|
EGameFlow: A scale to measure learners' enjoyment of e-learning games
|
[
{
"docid": "ef8d88d57858706ba269a8f3aaa989f3",
"text": "The mid 20 century witnessed some serious attempts in studies of play and games with an emphasis on their importance within culture. Most prominently, Johan Huizinga (1944) maintained in his book Homo Ludens that the earliest stage of culture is in the form of play and that culture proceeds in the shape and the mood of play. He also claimed that some elements of play crystallised as knowledge such as folklore, poetry and philosophy as culture advanced.",
"title": ""
},
{
"docid": "ff1c33b797861cde34b8705c1136912b",
"text": "This workshop addresses current needs in the games developers' community and games industry to evaluate the overall user experience of games. New forms of interaction techniques, like gestures, eye-tracking or even bio-physiological input and feedback present the limits of current evaluation methods for user experience, and even standard usability evaluation used during game development. This workshop intends to bring together practitioners and researchers sharing their experiences using methods from HCI to explore and measure usability and user experience in games. To this workshop we also invite contributions from other disciplines (especially from the games industry) showing new concepts for user experience evaluation.",
"title": ""
}
] |
[
{
"docid": "08b2b3539a1b10f7423484946121ed50",
"text": "BACKGROUND\nCatheter ablation of persistent atrial fibrillation yields an unsatisfactorily high number of failures. The hybrid approach has recently emerged as a technique that overcomes the limitations of both surgical and catheter procedures alone.\n\n\nMETHODS AND RESULTS\nWe investigated the sequential (staged) hybrid method, which consists of a surgical thoracoscopic radiofrequency ablation procedure followed by radiofrequency catheter ablation 6 to 8 weeks later using the CARTO 3 mapping system. Fifty consecutive patients (mean age 62±7 years, 32 males) with long-standing persistent atrial fibrillation (41±34 months) and a dilated left atrium (>45 mm) were included and prospectively followed in an unblinded registry. During the electrophysiological part of the study, all 4 pulmonary veins were found to be isolated in 36 (72%) patients and a complete box-lesion was confirmed in 14 (28%) patients. All gaps were successfully re-ablated. Twelve months after the completed hybrid ablation, 47 patients (94%) were in normal sinus rhythm (4 patients with paroxysmal atrial fibrillation required propafenone and 1 patient underwent a redo catheter procedure). The majority of arrhythmias recurred during the first 3 months. Beyond 12 months, there were no arrhythmia recurrences detected. The surgical part of the procedure was complicated by 7 (13.7%) major complications, while no serious adverse events were recorded during the radiofrequency catheter part of the procedure.\n\n\nCONCLUSIONS\nThe staged hybrid epicardial-endocardial treatment of long-standing persistent atrial fibrillation seems to be extremely effective in maintenance of normal sinus rhythm compared to radiofrequency catheter or surgical ablation alone. Epicardial ablation alone cannot guarantee durable transmural lesions.\n\n\nCLINICAL TRIAL REGISTRATION\nURL: www.ablace.cz Unique identifier: cz-060520121617.",
"title": ""
},
{
"docid": "65b9bef6e27683257a67e75a51a47ea0",
"text": "This paper describes a conceptual approach to individual and organizational competencies needed for Open Innovation (OI) using a new ambidexterity model. It starts from the assumption that the entire innovation process is rarely open by all means, as the OI concept may suggest. It rather takes into consideration that in practice especially for early phases of the innovation process the organization and their innovation actors are opening up for new ways of joint ideation, collaboration etc. to gain a maximum of explorative performance and effectiveness. Though, when it comes to committing considerable resources to development and implementation activities, the innovation process usually closes step by step as efficiency criteria gain ground for a maximum of knowledge exploitation. The ambidexterity model of competences for OI refers to these tensions and provides a new framework to understand the needs of industry and Higher Education Institutes (HEI) to develop appropriate exploration and exploitation competencies for OI.",
"title": ""
},
{
"docid": "fcccb84e3a26ed0acf53bac35ae466ea",
"text": "In this paper, we introduce a vision for Semantic Web services which combines the growing Web services architecture and the Semantic Web and we will propose DAML-S as a prototypical example of an ontology for describing Semantic Web services. Furthermore, we show that DAML-S is not just an abstract description, but it can be efficiently implemented to support capability matching and to manage interaction between Web services. Specifically, we will describe the implementation of the DAML-S/UDDI Matchmaker that expands on UDDI by providing semantic capability matching, and we will present the DAML-S Virtual Machine that uses the DAML-S Process Model to manage the interaction with Web service. We will also show that the use of DAML-S does not produce a performance penalty during the normal operation of Web services. © 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b6d853e456003da0978bafd8153511ec",
"text": "Bitcoin is gaining increasing adoption and popularity nowadays. In spite of its reliance on pseudonyms, Bitcoin raises a number of privacy concerns due to the fact that all of the transactions that take place in the system are publicly announced. The literature contains a number of proposals that aim at evaluating and enhancing user privacy in Bitcoin. To the best of our knowledge, ZeroCoin (ZC) is the first proposal which prevents the public tracing of coin expenditure in Bitcoin by leveraging zero-knowledge proofs of knowledge and one-way accumulators. While ZeroCoin hardens the traceability of coins, it does not hide the amount per transaction, nor does it prevent the leakage of the balances of Bitcoin addresses. In this paper, we propose, EZC, an extension of ZeroCoin which (i) enables the construction of multi-valued ZCs whose values are only known to the sender and recipient of the transaction and (ii) supports the expenditure of ZCs among users in the Bitcoin system, without the need to convert them back to Bitcoins. By doing so, EZC hides transaction values and address balances in Bitcoin, for those users who opt-out from exchanging their coins to BTCs. We performed a preliminary assessment of the performance of EZC; our findings suggest that EZC improves the communication overhead incurred in ZeroCoin.",
"title": ""
},
{
"docid": "6b467ec8262144150b17cedb3d96edcb",
"text": "We describe a new method of measuring surface currents using an interferometric synthetic aperture radar. An airborne implementation has been tested over the San Francisco Bay near the time of maximum tidal flow, resulting in a map of the east-west component of the current. Only the line-of-sight component of velocity is measured by this technique. Where the signal-to-noise ratio was strongest, statistical fluctuations of less than 4 cm s−1 were observed for ocean patches of 60×60 m.",
"title": ""
},
{
"docid": "1cc5ab9bd552e6399c6cf5a06e0ca235",
"text": "Fake identities and Sybil accounts are pervasive in today’s online communities. They are responsible for a growing number of threats, including fake product reviews, malware and spam on social networks, and astroturf political campaigns. Unfortunately, studies show that existing tools such as CAPTCHAs and graph-based Sybil detectors have not proven to be effective defenses. In this paper, we describe our work on building a practical system for detecting fake identities using server-side clickstream models. We develop a detection approach that groups “similar” user clickstreams into behavioral clusters, by partitioning a similarity graph that captures distances between clickstream sequences. We validate our clickstream models using ground-truth traces of 16,000 real and Sybil users from Renren, a large Chinese social network with 220M users. We propose a practical detection system based on these models, and show that it provides very high detection accuracy on our clickstream traces. Finally, we worked with collaborators at Renren and LinkedIn to test our prototype on their server-side data. Following positive results, both companies have expressed strong interest in further experimentation and possible internal deployment.",
"title": ""
},
{
"docid": "5c898e311680199f1f369d3c264b2b14",
"text": "Behaviour Driven Development (BDD) has gained increasing attention as an agile development approach in recent years. However, characteristics that constituite the BDD approach are not clearly defined. In this paper, we present a set of main BDD charactersitics identified through an analysis of relevant literature and current BDD toolkits. Our study can provide a basis for understanding BDD, as well as for extending the exisiting BDD toolkits or developing new ones.",
"title": ""
},
{
"docid": "d35ff18c7d7f8f02f803a0138530fbff",
"text": "This paper presents the design and development of a novel Natural Language Interface to Database (NLIDB). The developed prototype is called Aneesah the NLIDB, which is capable of allowing users to interactively/conversely access desired information stored in a relational database. This paper introduces the novel conversational agent enabled architecture of Aneesah NLIDB and describes the scripting techniques that has been adopted for its development. The proposed framework for Aneesah NLIDB is based on pattern matching techniques implemented to converse with users, handle complexities and ambiguities for building dynamic SQL queries from multiple dialogues in order to extract database information. The preliminary evaluation results gathered following a pilot study reveal promising results. Index Terms – Natural Language Interface to Databases (NLIDB), Conversational Agents (CA), Knowledge base, Artificial Intelligence (AI), Pattern Matching (PM).",
"title": ""
},
{
"docid": "0c01132904f2c580884af1391069addd",
"text": "BACKGROUND\nThe inclusion of qualitative studies in systematic reviews poses methodological challenges. This paper presents worked examples of two methods of data synthesis (textual narrative and thematic), used in relation to one review, with the aim of enabling researchers to consider the strength of different approaches.\n\n\nMETHODS\nA systematic review of lay perspectives of infant size and growth was conducted, locating 19 studies (including both qualitative and quantitative). The data extracted from these were synthesised using both a textual narrative and a thematic synthesis.\n\n\nRESULTS\nThe processes of both methods are presented, showing a stepwise progression to the final synthesis. Both methods led us to similar conclusions about lay views toward infant size and growth. Differences between methods lie in the way they dealt with study quality and heterogeneity.\n\n\nCONCLUSION\nOn the basis of the work reported here, we consider textual narrative and thematic synthesis have strengths and weaknesses in relation to different research questions. Thematic synthesis holds most potential for hypothesis generation, but may obscure heterogeneity and quality appraisal. Textual narrative synthesis is better able to describe the scope of existing research and account for the strength of evidence, but is less good at identifying commonality.",
"title": ""
},
{
"docid": "f7c46115abe7cc204dd7dbd56f9e13c6",
"text": "Forecasting of future electricity demand is very important for decision making in power system operation and planning. In recent years, due to privatization and deregulation of the power industry, accurate electricity forecasting has become an important research area for efficient electricity production. This paper presents a time series approach for mid-term load forecasting (MTLF) in order to predict the daily peak load for the next month. The proposed method employs a computational intelligence scheme based on the self-organizing map (SOM) and support vector machine (SVM). According to the similarity degree of the time series load data, SOM is used as a clustering tool to cluster the training data into two subsets, using the Kohonen rule. As a novel machine learning technique, the support vector regression (SVR) is used to fit the testing data based on the clustered subsets, for predicting the daily peak load. Our proposed SOM-SVR load forecasting model is evaluated in MATLAB on the electricity load dataset provided by the Eastern Slovakian Electricity Corporation, which was used in the 2001 European Network on Intelligent Technologies (EUNITE) load forecasting competition. Power load data obtained from (i) Tenaga Nasional Berhad (TNB) for peninsular Malaysia and (ii) PJM for the eastern interconnection grid of the United States of America is used to benchmark the performance of our proposed model. Experimental results obtained indicate that our proposed SOM-SVR technique gives significantly good prediction accuracy for MTLF compared to previously researched findings using the EUNITE, Malaysian and PJM electricity load",
"title": ""
},
{
"docid": "82d3217331a70ead8ec3064b663de451",
"text": "The idea of computer vision as the Bayesian inverse problem to computer graphics has a long history and an appealing elegance, but it has proved difficult to directly implement. Instead, most vision tasks are approached via complex bottom-up processing pipelines. Here we show that it is possible to write short, simple probabilistic graphics programs that define flexible generative models and to automatically invert them to interpret real-world images. Generative probabilistic graphics programs consist of a stochastic scene generator, a renderer based on graphics software, a stochastic likelihood model linking the renderer’s output and the data, and latent variables that adjust the fidelity of the renderer and the tolerance of the likelihood model. Representations and algorithms from computer graphics, originally designed to produce high-quality images, are instead used as the deterministic backbone for highly approximate and stochastic generative models. This formulation combines probabilistic programming, computer graphics, and approximate Bayesian computation, and depends only on general-purpose, automatic inference techniques. We describe two applications: reading sequences of degraded and adversarially obscured alphanumeric characters, and inferring 3D road models from vehicle-mounted camera images. Each of the probabilistic graphics programs we present relies on under 20 lines of probabilistic code, and supports accurate, approximately Bayesian inferences about ambiguous real-world images.",
"title": ""
},
{
"docid": "e1d9ff28da38fcf8ea3a428e7990af25",
"text": "The Autonomous car is a complex topic, different technical fields like: Automotive engineering, Control engineering, Informatics, Artificial Intelligence etc. are involved in solving the human driver replacement with an artificial (agent) driver. The problem is even more complicated because usually, nowadays, having and driving a car defines our lifestyle. This means that the mentioned (major) transformation is also a cultural issue. The paper will start with the mentioned cultural aspects related to a self-driving car and will continue with the big picture of the system.",
"title": ""
},
{
"docid": "350daaeb965ac6a1383ec96f4d34e0ba",
"text": "This paper proposes a new automatic approach for the detection of SQL Injection and XPath Injection vulnerabilities, two of the most common and most critical types of vulnerabilities in web services. Although there are tools that allow testing web applications against security vulnerabilities, previous research shows that the effectiveness of those tools in web services environments is very poor. In our approach a representative workload is used to exercise the web service and a large set of SQL/XPath Injection attacks are applied to disclose vulnerabilities. Vulnerabilities are detected by comparing the structure of the SQL/XPath commands issued in the presence of attacks to the ones previously learned when running the workload in the absence of attacks. Experimental evaluation shows that our approach performs much better than known tools (including commercial ones), achieving extremely high detection coverage while maintaining the false positives rate very low.",
"title": ""
},
{
"docid": "834a5cb9f2948443fbb48f274e02ca9c",
"text": "The Carnegie Mellon Communicator is a telephone-based dialog system that supports planning in a travel domain. The implementation of such a system requires two complimentary components, an architecture capable of managing interaction and the task, as well as a knowledge base that captures the speech, language and task characteristics specific to the domain. Given a suitable architecture, the principal effort in development in taken up in the acquisition and processing of a domain knowledge base. This paper describes a variety of techniques we have applied to modeling in acoustic, language, task, generation and synthesis components of the system.",
"title": ""
},
{
"docid": "6d61da17db5c16611409356bd79006c4",
"text": "We examine empirical evidence for religious prosociality, the hypothesis that religions facilitate costly behaviors that benefit other people. Although sociological surveys reveal an association between self-reports of religiosity and prosociality, experiments measuring religiosity and actual prosocial behavior suggest that this association emerges primarily in contexts where reputational concerns are heightened. Experimentally induced religious thoughts reduce rates of cheating and increase altruistic behavior among anonymous strangers. Experiments demonstrate an association between apparent profession of religious devotion and greater trust. Cross-cultural evidence suggests an association between the cultural presence of morally concerned deities and large group size in humans. We synthesize converging evidence from various fields for religious prosociality, address its specific boundary conditions, and point to unresolved questions and novel predictions.",
"title": ""
},
{
"docid": "3fa0be0d8075e68b5344fe85d37c7dee",
"text": "We develop a structural model for the co-evolution of individuals’ friendship tie formations and their concurrent online activities (product adoptions and production of user-generated content) within a social network. Explicitly modeling the endogenous formation of the network and accounting for the interdependence between decisions in these two areas (friendship formations and concurrent online activities) provides a clean identification of peer effects and of important drivers of individuals’ friendship decisions. We estimate our model using a novel data set capturing the continuous development of a network and users’ entire action histories within the network. Our results reveal that, compared to a potential friend’s product adoptions and content generation activities, the total number of friends and the number of common friends this potential friend has with the focal individual are the most important drivers of friendship formation. Further, while having more friends does not make a person more active, having more active friends does increase a user’s activity levels in terms of both product adoptions and content generation through peer effects. Via counterfactuals we assess the effectiveness of various seeding and stimulation strategies in increasing website traffic while taking the endogenous network formation into account. We find that seeding to users with the most friends is not always the best strategy to increase users’ activity levels on the website.",
"title": ""
},
{
"docid": "8aae828a75eb83192e7ac9850f70e7ff",
"text": "Over the past decade, goal models have been used in Computer Science in order to represent software requirements, business objectives and design qualities. Such models extend traditional AI planning techniques for representing goals by allowing for partially defined and possibly inconsistent goals. This paper presents a formal framework for reasoning with such goal models. In particular, the paper proposes a qualitative and a numerical axiomatization for goal modeling primitives and introduces label propagation algorithms that are shown to be sound and complete with respect to their respective axiomatizations. In addition, the paper reports on experimental results on the propagation algorithms applied to a goal model for a US car manufacturer.",
"title": ""
},
{
"docid": "27ba6cfdebdedc58ab44b75a15bbca05",
"text": "OBJECTIVES\nTo assess the influence of material/technique selection (direct vs. CAD/CAM inlays) for large MOD composite adhesive restorations and its effect on the crack propensity and in vitro accelerated fatigue resistance.\n\n\nMETHODS\nA standardized MOD slot-type tooth preparation was applied to 32 extracted maxillary molars (5mm depth and 5mm bucco-palatal width) including immediately sealed dentin for the inlay group. Fifteen teeth were restored with direct composite resin restoration (Miris2) and 17 teeth received milled inlays using Paradigm MZ100 block in the CEREC machine. All inlays were adhesively luted with a light curing composite resin (Filtek Z100). Enamel shrinkage-induced cracks were tracked with photography and transillumination. Cyclic isometric chewing (5 Hz) was simulated, starting with a load of 200 N (5000 cycles), followed by stages of 400, 600, 800, 1000, 1200 and 1400 N at a maximum of 30,000 cycles each. Samples were loaded until fracture or to a maximum of 185,000 cycles.\n\n\nRESULTS\nTeeth restored with the direct technique fractured at an average load of 1213 N and two of them withstood all loading cycles (survival=13%); with inlays, the survival rate was 100%. Most failures with Miris2 occurred above the CEJ and were re-restorable (67%), but generated more shrinkage-induced cracks (47% of the specimen vs. 7% for inlays).\n\n\nSIGNIFICANCE\nCAD/CAM MZ100 inlays increased the accelerated fatigue resistance and decreased the crack propensity of large MOD restorations when compared to direct restorations. While both restorative techniques yielded excellent fatigue results at physiological masticatory loads, CAD/CAM inlays seem more indicated for high-load patients.",
"title": ""
},
{
"docid": "aa2af8bd2ef74a0b5fa463a373a4c049",
"text": "What modern game theorists describe as “fictitious play” is not the learning process George W. Brown defined in his 1951 paper. Brown’s original version differs in a subtle detail, namely the order of belief updating. In this note we revive Brown’s original fictitious play process and demonstrate that this seemingly innocent detail allows for an extremely simple and intuitive proof of convergence in an interesting and large class of games: nondegenerate ordinal potential games. © 2006 Elsevier Inc. All rights reserved. JEL classification: C72",
"title": ""
}
] |
scidocsrr
|
608e65df2387725640588e9912acd554
|
Speeding up Semantic Segmentation for Autonomous Driving
|
[
{
"docid": "35625f248c81ebb5c20151147483f3f6",
"text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.",
"title": ""
}
] |
[
{
"docid": "3392de95bfc0e16776550b2a0a8fa00e",
"text": "This paper presents a new type of three-phase voltage source inverter (VSI), called three-phase dual-buck inverter. The proposed inverter does not need dead time, and thus avoids the shoot-through problems of traditional VSIs, and leads to greatly enhanced system reliability. Though it is still a hard-switching inverter, the topology allows the use of power MOSFETs as the active devices instead of IGBTs typically employed by traditional hard-switching VSIs. As a result, the inverter has the benefit of lower switching loss, and it can be designed at higher switching frequency to reduce current ripple and the size of passive components. A unified pulsewidth modulation (PWM) is introduced to reduce computational burden in real-time implementation. Different PWM methods were applied to a three-phase dual-buck inverter, including sinusoidal PWM (SPWM), space vector PWM (SVPWM) and discontinuous space vector PWM (DSVPWM). A 2.5 kW prototype of a three-phase dual-buck inverter and its control system has been designed and tested under different dc bus voltage and modulation index conditions to verify the feasibility of the circuit, the effectiveness of the controller, and to compare the features of different PWMs. Efficiency measurement of different PWMs has been conducted, and the inverter sees peak efficiency of 98.8% with DSVPWM.",
"title": ""
},
{
"docid": "4a51fa781609c0fab79fff536a14aa43",
"text": "Recently end-to-end speech recognition has obtained much attention. One of the popular models to achieve end-to-end speech recognition is attention based encoder-decoder model, which usually generating output sequences iteratively by attending the whole representations of the input sequences. However, predicting outputs until receiving the whole input sequence is not practical for online or low time latency speech recognition. In this paper, we present a simple but effective attention mechanism which can make the encoder-decoder model generate outputs without attending the entire input sequence and can apply to online speech recognition. At each prediction step, the attention is assumed to be a time-moving gaussian window with variable size and can be predicted by using previous input and output information instead of the content based computation on the whole input sequence. To further improve the online performance of the model, we employ deep convolutional neural networks as encoder. Experiments show that the gaussian prediction based attention works well and under the help of deep convolutional neural networks the online model achieves 19.5% phoneme error rate in TIMIT ASR task.",
"title": ""
},
{
"docid": "bc92aa05e989ead172274b4558aa4443",
"text": "A recent video coding standard, called High Efficiency Video Coding (HEVC), adopts two in-loop filters for coding efficiency improvement where the in-loop filtering is done by a de-blocking filter (DF) followed by sample adaptive offset (SAO) filtering. The DF helps improve both coding efficiency and subjective quality without signaling any bit to decoder sides while SAO filtering corrects the quantization errors by sending offset values to decoders. In this paper, we first present a new in-loop filtering technique using convolutional neural networks (CNN), called IFCNN, for coding efficiency and subjective visual quality improvement. The IFCNN does not require signaling bits by using the same trained weights in both encoders and decoder. The proposed IFCNN is trained in two different QP ranges: QR1 from QP = 20 to QP = 29; and QR2 from QP = 30 to QP = 39. In testing, the IFCNN trained in QR1 is applied for the encoding/decoding with QP values less than 30 while the IFCNN trained in QR2 is applied for the case of QP values greater than 29. The experiment results show that the proposed IFCNN outperforms the HEVC reference mode (HM) with average 1.9%-2.8% gain in BD-rate for Low Delay configuration, and average 1.6%-2.6% gain in BD-rate for Random Access configuration with IDR period 16.",
"title": ""
},
{
"docid": "e0223a5563e107308c88a43df5b1c8ba",
"text": "One question central to Reinforcement Learning is how to learn a feature representation that supports algorithm scaling and re-use of learned information from different tasks. Successor Features approach this problem by learning a feature representation that satisfies a temporal constraint. We present an implementation of an approach that decouples the feature representation from the reward function, making it suitable for transferring knowledge between domains. We then assess the advantages and limitations of using Successor Features for transfer.",
"title": ""
},
{
"docid": "b14ce16f81bf19c2e3ae1120b42f14c0",
"text": "Most robotic grasping tasks assume a stationary or fixed object. In this paper, we explore the requirements for tracking and grasping a moving object. The focus of our work is to achieve a high level of interaction between a real-time vision system capable of tracking moving objects in 3-D and a robot arm with gripper that can be used to pick up a moving object. There is an interest in exploring the interplay of hand-eye coordination for dynamic grasping tasks such as grasping of parts on a moving conveyor system, assembly of articulated parts, or for grasping from a mobile robotic system. Coordination between an organism's sensing modalities and motor control system is a hallmark of intelligent behavior, and we are pursuing the goal of building an integrated sensing and actuation system that can operate in dynamic as opposed to static environments. The system we have built addresses three distinct problems in robotic hand-eye coordination for grasping moving objects: fast computation of 3-D motion parameters from vision, predictive control of a moving robotic arm to track a moving object, and interception and grasping. The system is able to operate at approximately human arm movement rates, and experimental results in which a moving model train is tracked is presented, stably grasped, and picked up by the system. The algorithms we have developed that relate sensing to actuation are quite general and applicable to a variety of complex robotic tasks that require visual feedback for arm and hand control.",
"title": ""
},
{
"docid": "bd1ab7a30b4478a6320e5cad4698c2b4",
"text": "Corresponding Author: Jing Wang Boston University, Boston, MA, USA Email: [email protected] Abstract: Non-inferiority of a diagnostic test to the standard is a common issue in medical research. For instance, we may be interested in determining if a new diagnostic test is noninferior to the standard reference test because the new test might be inexpensive to the extent that some small inferior margin in sensitivity or specificity may be acceptable. Noninferiority trials are also found to be useful in clinical trials, such as image studies, where the data are collected in pairs. Conventional noninferiority trials for paired binary data are designed with a fixed sample size and no interim analysis is allowed. Adaptive design which allows for interim modifications of the trial becomes very popular in recent years and are widely used in clinical trials because of its efficiency. However, to our knowledge there is no adaptive design method available for noninferiority trial with paired binary data. In this study, we developed an adaptive design method for non-inferiority trials with paired binary data, which can also be used for superiority trials when the noninferiority margin is set to zero. We included a trial example and provided the SAS program for the design simulations.",
"title": ""
},
{
"docid": "9a05c95de1484df50a5540b31df1a010",
"text": "Resumen. Este trabajo trata sobre un sistema de monitoreo remoto a través de una pantalla inteligente para sensores de temperatura y corriente utilizando una red híbrida CAN−ZIGBEE. El CAN bus es usado como medio de transmisión de datos a corta distancia mientras que Zigbee es empleado para que cada nodo de la red pueda interactuar de manera inalámbrica con el nodo principal. De esta manera la red híbrida combina las ventajas de cada protocolo de comunicación para intercambiar datos. El sistema cuenta con cuatro nodos, dos son CAN y reciben la información de los sensores y el resto son Zigbee. Estos nodos están a cargo de transmitir la información de un nodo CAN de manera inalámbrica y desplegarla en una pantalla inteligente.",
"title": ""
},
{
"docid": "cf52d720512c316dc25f8167d5571162",
"text": "BACKGROUND\nHidradenitis suppurativa (HS) is a chronic relapsing skin disease. Recent studies have shown promising results of anti-tumor necrosis factor-alpha treatment.\n\n\nOBJECTIVE\nTo compare the efficacy and safety of infliximab and adalimumab in the treatment of HS.\n\n\nMETHODS\nA retrospective study was performed to compare 2 cohorts of 10 adult patients suffering from severe, recalcitrant HS. In 2005, 10 patients were treated with infliximab intravenous (i.v.) (3 infusions of 5 mg/kg at weeks 0, 2, and 6). In 2009, 10 other patients were treated in the same hospital with adalimumab subcutaneous (s.c.) 40 mg every other week. Both cohorts were followed up for 1 year using identical evaluation methods [Sartorius score, quality of life index, reduction of erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP), patient and doctor global assessment, and duration of efficacy].\n\n\nRESULTS\nNineteen patients completed the study. In both groups, the severity of the HS diminished. Infliximab performed better in all aspects. The average Sartorius score was reduced to 54% of baseline for the infliximab group and 66% of baseline for the adalimumab group.\n\n\nCONCLUSIONS\nAdalimumab s.c. 40 mg every other week is less effective than infliximab i.v. 5 mg/kg at weeks 0, 2, and 6.",
"title": ""
},
{
"docid": "d22c69d0c546dfb4ee5d38349bf7154f",
"text": "Investigation of functional brain connectivity patterns using functional MRI has received significant interest in the neuroimaging domain. Brain functional connectivity alterations have widely been exploited for diagnosis and prediction of various brain disorders. Over the last several years, the research community has made tremendous advancements in constructing brain functional connectivity from timeseries functional MRI signals using computational methods. However, even modern machine learning techniques rely on conventional correlation and distance measures as a basic step towards the calculation of the functional connectivity. Such measures might not be able to capture the latent characteristics of raw time-series signals. To overcome this shortcoming, we propose a novel convolutional neural network based model, FCNet, that extracts functional connectivity directly from raw fMRI time-series signals. The FCNet consists of a convolutional neural network that extracts features from time-series signals and a fully connected network that computes the similarity between the extracted features in a Siamese architecture. The functional connectivity computed using FCNet is combined with phenotypic information and used to classify individuals as healthy controls or neurological disorder subjects. Experimental results on the publicly available ADHD-200 dataset demonstrate that this innovative framework can improve classification accuracy, which indicates that the features learnt from FCNet have superior discriminative power.",
"title": ""
},
{
"docid": "ad3437a7458e9152f3eb451e5c1af10f",
"text": "In recent years the number of academic publication increased strongly. As this information flood grows, it becomes more difficult for researchers to find relevant literature effectively. To overcome this difficulty, recommendation systems can be used which often utilize text similarity to find related documents. To improve those systems we add scientometrics as a ranking measure for popularity into these algorithms. In this paper we analyse whether and how scientometrics are useful in a recommender system.",
"title": ""
},
{
"docid": "97841476457ac6599e005367d1ffc5b9",
"text": "Robust vigilance estimation during driving is very crucial in preventing traffic accidents. Many approaches have been proposed for vigilance estimation. However, most of the approaches require collecting subject-specific labeled data for calibration which is high-cost for real-world applications. To solve this problem, domain adaptation methods can be used to align distributions of source subject features (source domain) and new subject features (target domain). By reusing existing data from other subjects, no labeled data of new subjects is required to train models. In this paper, our goal is to apply adversarial domain adaptation networks to cross-subject vigilance estimation. We adopt two kinds of recently proposed adversarial domain adaptation networks and compare their performance with those of several traditional domain adaptation methods and the baseline without domain adaptation. A publicly available dataset, SEED-VIG, is used to evaluate the methods. The dataset includes electroencephalography (EEG) and electrooculography (EOG) signals, as well as the corresponding vigilance level annotations during simulated driving. Compared with the baseline, both adversarial domain adaptation networks achieve improvements over 10% in terms of Pearson’s correlation coefficient. In addition, both methods considerably outperform the traditional domain adaptation methods.",
"title": ""
},
{
"docid": "a49962a29221a26df3d7c4ef9034d61a",
"text": "In this paper we discuss the evolution of mobility management mechanisms in mobile networks. We emphasize problems with current mobility management approaches in case of very high dense and heterogeneous networks. The main contribution of the paper is a discussion on how the Software-Defined Networking (SDN) technology can be applied in mobile networks in order to efficiently handle mobility in the context of future mobile networks (5G) or evolved LTE. The discussion addresses the most important problems related to mobility management like preservation of session continuity and scalability of handovers in very dense mobile networks. Three variants of SDN usage in order to handle mobility are described and compared in this paper. The most advanced of these variants shows how mobility management mechanisms can be easily integrated with autonomie management mechanisms, providing much more advanced functionality than is provided now by the SON approach. Such mechanisms increase robustness of the handover and optimize the usage of wireless and wired mobile network resources.",
"title": ""
},
{
"docid": "65ac52564041b0c2e173560d49ec762f",
"text": "Constructionism can be a powerful framework for teaching complex content to novices. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn this content in contextualized, personally-meaningful ways. In this paper, we investigate the relevance of a set of approaches broadly called “educational data mining” or “learning analytics” (henceforth, EDM) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. We suggest that EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition but also to wider communities. Finally, we explore potential collaborations between researchers in the EDM and constructionist traditions; such collaborations have the potential to enhance the ability of constructionist researchers to make rich inference about learning and learners, while providing EDM researchers with many interesting new research questions and challenges. In recent years, project-based, student-centered approaches to education have gained prominence, due in part to an increased demand for higher-level skills in the job market (Levi and Murname, 2004), positive research findings on the effectiveness of such approaches (Barron, Pearson, et al., 2008), and a broader acceptance in public policy circles, as shown, for example, by the Next Generation Science Standards (NGSS Lead States, 2013). While several approaches for this type of learning exist, Constructionism is one of the most popular and well-developed ones (Papert, 1980). In this paper, we investigate the relevance of a set of approaches called “educational data mining” or “learning analytics” (henceforth abbreviated as ‘EDM’) (R. Baker & Yacef, 2009; Romero & Ventura, 2010a; R. Baker & Siemens, in press) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. As such, EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition and to the wider community of learning scientists and policymakers. EDM, broadly, is a set of methods that apply data mining and machine learning techniques such as prediction, classification, and discovery of latent structural regularities to rich, voluminous, and idiosyncratic educational data, potentially similar to those data generated by many constructionist learning environments which allows students to explore and build their own artifacts, computer programs, and media pieces. As such, we identify four axes in which EDM methods may be helpful for constructionist research: 1. EDM methods do not require constructionists to abandon deep qualitative analysis for simplistic summative or confirmatory quantitative analysis; 2. EDM methods can generate different and complementary new analyses to support qualitative research; 3. By enabling precise formative assessments of complex constructs, EDM methods can support an increase in methodological rigor and replicability; 4. EDM can be used to present comprehensible and actionable data to learners and teachers in situ. 
In order to investigate those axes, we start by describing our perspective on compatibilities and incompatibilities between constructionism and EDM. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn that complex content in connected, meaningful ways. Constructionist projects often emphasize making those artifacts (and often data) public, socially relevant, and personally meaningful to learners, and encourage working in social spaces such that learners engage each other to accelerate the learning process. diSessa and Cobb (2004) argue that constructionism serves a framework for action, as it describes its own praxis (i.e., how it matches theory to practice). The learning theory supporting constructionism is classically constructivist, combining concepts from Piaget and Vygotsky (Fosnot, 2005). As constructionism matures as a constructivist framework for action and expands in scale, constructionist projects are becoming both more complex (Reynolds & Caperton, 2011), more scalable (Resnick, Maloney, et al., 2009), and more affordable for schools following significant development in low cost “construction” technologies such as robotics and 3D printers. As such, there have been increasing opportunities to learn more about how students learn in constructionist contexts, advancing the science of learning. These discoveries will have the potential to improve the quality of all constructivist learning experiences. For example, Wilensky and Reisman (2006) have shown how constructionist modeling and simulation can make science learning more accessible, Resnick (1998) has shown how constructionism can reframe programming as art at scale, Buechley & Eisenberg (2008) have used e-textiles to engage female students in robotics, Eisenberg (2011) and Blikstein (2013, 2014) use constructionist digital fabrication to successfully teach programming, engineering, and electronics in a novel, integrated way. The findings of these research and design projects have the potential to be useful to a wide external community of teachers, researchers, practitioners, and other stakeholders. However, connecting findings from the constructionist tradition to the goals of policymakers can be challenging, due to the historical differences in methodology and values between these communities. The resources needed to study such interventions at scale are considerable, given the need to carefully document, code, and analyze each student’s work processes and artifacts. The designs of constructionist research often result in findings that do not map to what researchers, outside interests, and policymakers are expecting, in contrast to conventional controlled studies, which are designed to (more conclusively) answer a limited set of sharply targeted research questions. Due the lack of a common ground to discuss benefits and scalability of constructionist and project-based designs, these designs have been too frequently sidelined to niche institutions such as private schools, museums, or atypical public schools. To understand what the role EDM methods can play in constructionist research, we must frame what we mean by constructionist research more precisely. We follow Papert and Harel (1991) in their situating of constructionism, but they do not constrain the term to one formal definition. 
The definition is further complicated by the fact that constructionism has many overlaps with other research and design traditions, such as constructivism and socio-constructivism themselves, as well as project-based pedagogies and inquiry-based designs. However, we believe that it is possible to define the subset of constructionism amenable to EDM, a focus we adopt in this article for brevity. In this paper, we focus on the constructionist literature dealing with students learning to construct understandings by constructing (physical or virtual) artifacts, where the students' learning environments are designed and constrained such that building artifacts in/with that environment is designed to help students construct their own understandings. In other words, we are focusing on creative work done in computational environments designed to foster creative and transformational learning, such as NetLogo (Wilensky, 1999), Scratch (Resnick, Maloney, et al., 2009), or LEGO Mindstorms. This sub-category of constructionism can and does generate considerable formative and summative data. It also has the benefit of having a history of success in the classroom. From Papert’s seminal (1972) work through today, constructionist learning has been shown to promote the development of deep understanding of relatively complex content, with many examples ranging from mathematics (Harel, 1990; Wilensky, 1996) to history (Zahn, Krauskopf, Hesse, & Pea, 2010). However, constructionist learning environments, ideas, and findings have yet to reach the majority of classrooms and have had incomplete influence in the broader education research community. There are several potential reasons for this. One of them may be a lack of demonstration that findings are generalizable across populations and across specific content. Another reason is that constructionist activities are seen to be timeconsuming for teachers (Warschauer & Matuchniak, 2010), though, in practice, it has been shown that supporting understanding through project-based work could actually save time (Fosnot, 2005) and enable classroom dynamics that may streamline class preparation (e.g., peer teaching or peer feedback). A last reason is that constructionists almost universally value more deep understanding of scientific principles than facts or procedural skills even in contexts (e.g., many classrooms) in which memorization of facts and procedural skills is the target to be evaluated (Abelson & diSessa, 1986; Papert & Harel, 1991). Therefore, much of what is learned in constructionist environments does not directly translate to test scores or other established metrics. Constructionist research can be useful and convincing to audiences that do not yet take full advantage of the scientific findings of this community, but it requires careful consideration of framing and evidence to reach them. Educational data mining methods pose the potential to both enhance constructionist research, and to support constructionist researchers in communicating their findings in a fashion that other researchers consider valid. Blikstein (2011, p. 110) made ",
"title": ""
},
{
"docid": "3e60194e452e0e7a478d7c5f563eaa13",
"text": "The use of data stored in transaction logs of Web search engines, Intranets, and Web sites can provide valuable insight into understanding the information-searching process of online searchers. This understanding can enlighten information system design, interface development, and devising the information architecture for content collections. This article presents a review and foundation for conducting Web search transaction log analysis. A methodology is outlined consisting of three stages, which are collection, preparation, and analysis. The three stages of the methodology are presented in detail with discussions of goals, metrics, and processes at each stage. Critical terms in transaction log analysis for Web searching are defined. The strengths and limitations of transaction log analysis as a research method are presented. An application to log client-side interactions that supplements transaction logs is reported on, and the application is made available for use by the research community. Suggestions are provided on ways to leverage the strengths of, while addressing the limitations of, transaction log analysis for Web-searching research. Finally, a complete flat text transaction log from a commercial search engine is available as supplementary material with this manuscript. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "f0d85230b2a6a14f9b291a9e08a29787",
"text": "In this paper, we propose a Computer Assisted Diagnosis (CAD) system based on a deep Convolutional Neural Network (CNN) model, to build an end-to-end learning process that classifies breast mass lesions. We investigate the impact that has transfer learning when large data is scarce, and explore the proper way to fine-tune the layers to learn features that are more specific to the new data. The proposed approach showed better performance compared to other proposals that classified the same dataset. 1 Background and objectives Breast cancer is the most common invasive disease among women [Siegel et al., 2014] Optimistically, an early diagnosis of the disease increases the chances of recovery dramatically and as such, makes the early detection crucial. Mammography is the recommended screening technique, but it is not enough, we also need the radiologist expertise to check the mammograms for lesions and give a diagnosis, which can be a very challenging task[Kerlikowske et al., 2000]. Radiologists often resort to biopsies and this ends up adding exorbitant expenses to an already burdened patient and health care system [Sickles, 1991]. We propose a Computer Assisted Diagnosis (CAD) system, based on a deep Convolutional Neural Network (CNN) model, designed to be used as a “second-opinion” to help the radiologist give more accurate diagnoses. Deep Learning requires large datasets to train networks of a certain depth from scratch, which are lacking in the medical domain especially for breast cancer. Transfer learning proved to be efficient to deal with little data, even if the knowledge transfer is between two very different domains [Shin et al., 2016]. But still using the technique can be tricky, especially with medical datasets that tend to be unbalanced and limited. And when using the state-of-the art CNNs which are very deep, the models are highly inclined to suffer from overfitting even with the use of many tricks like data augmentation, regularization and dropout. The number of layers to fine-tune and the optimization strategy play a substantial role on the overall performance [Yosinski et al., 2014]. This raises few questions: • Is Transfer Learning really beneficial for this application? • How can we avoid overfitting with our small dataset ? • How much fine-tuning do we need? and what is the proper way to do it? 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. ar X iv :1 71 1. 10 75 2v 1 [ cs .C V ] 2 9 N ov 2 01 7 We investigate the proper way to perform transfer learning and fine-tuning, which will allow us to take advantage of the pre-trained weights and adapt them to our task of interest. We empirically analyze the impact of the fine-tuned fraction on the final results, and we propose to use an exponentially decaying learning rate to customize all the pre-trained weights from ImageNet[Deng et al., 2009] and make them more suited to our type of data. The best model can be used as a baseline to predict if a new “never-seen” breast mass lesion is benign or malignant.",
"title": ""
},
{
"docid": "40d8c7f1d24ef74fa34be7e557dca920",
"text": "the rapid changing Internet environment has formed a competitive business setting, which provides opportunities for conducting businesses online. Availability of online transaction systems enable users to buy and make payment for products and services using the Internet platform. Thus, customers’ involvements in online purchasing have become an important trend. However, since the market is comprised of many different people and cultures, with diverse viewpoints, e-commerce businesses are being challenged by the reality of complex behavior of consumers. Therefore, it is vital to identify the factors that affect consumers purchasing decision through e-commerce in respective cultures and societies. In response to this claim, the purpose of this study is to explore the factors affecting customers’ purchasing decision through e-commerce (online shopping). Several factors such as trust, satisfaction, return policy, cash on delivery, after sale service, cash back warranty, business reputation, social and individual attitude, are considered. At this stage, the factors mentioned above, which are commonly considered influencing purchasing decision through online shopping in literature, are hypothesized to measure the causal relationship within the framework.",
"title": ""
},
{
"docid": "719458301e92f1c5141971ea8a21342b",
"text": "In the 65 years since its formal specification, information theory has become an established statistical paradigm, providing powerful tools for quantifying probabilistic relationships. Behavior analysis has begun to adopt these tools as a novel means of measuring the interrelations between behavior, stimuli, and contingent outcomes. This approach holds great promise for making more precise determinations about the causes of behavior and the forms in which conditioning may be encoded by organisms. In addition to providing an introduction to the basics of information theory, we review some of the ways that information theory has informed the studies of Pavlovian conditioning, operant conditioning, and behavioral neuroscience. In addition to enriching each of these empirical domains, information theory has the potential to act as a common statistical framework by which results from different domains may be integrated, compared, and ultimately unified.",
"title": ""
},
{
"docid": "5d3a0b1dfdbffbd4465ad7a9bb2f6878",
"text": "The Cancer Genome Atlas (TCGA) is a public funded project that aims to catalogue and discover major cancer-causing genomic alterations to create a comprehensive \"atlas\" of cancer genomic profiles. So far, TCGA researchers have analysed large cohorts of over 30 human tumours through large-scale genome sequencing and integrated multi-dimensional analyses. Studies of individual cancer types, as well as comprehensive pan-cancer analyses have extended current knowledge of tumorigenesis. A major goal of the project was to provide publicly available datasets to help improve diagnostic methods, treatment standards, and finally to prevent cancer. This review discusses the current status of TCGA Research Network structure, purpose, and achievements.",
"title": ""
},
{
"docid": "aa0335bc5090796453d7efdc247bb477",
"text": "Understanding signature complexity has been shown to be a crucial facet for both forensic and biometric appbcations. The signature complexity can be defined as the difficulty that forgers have when imitating the dynamics (constructional aspects) of other users signatures. Knowledge of complexity along with others facets such stability and signature length can lead to more robust and secure automatic signature verification systems. The work presented in this paper investigates the creation of a novel mathematical model for the automatic assessment of the signature complexity, analysing a wider set of dynamic signature features and also incorporating a new layer of detail, investigating the complexity of individual signature strokes. To demonstrate the effectiveness of the model this work will attempt to reproduce the signature complexity assessment made by experienced FDEs on a dataset of 150 signature samples.",
"title": ""
},
{
"docid": "5f67840ff6a168c8609a20504e0bd19a",
"text": "The core motor symptoms of Parkinson's disease (PD) are attributable to the degeneration of dopaminergic neurons in the substantia nigra pars compacta (SNc). Mitochondrial oxidant stress is widely viewed a major factor in PD pathogenesis. Previous work has shown that activity-dependent calcium entry through L-type channels elevates perinuclear mitochondrial oxidant stress in SNc dopaminergic neurons, providing a potential basis for their selective vulnerability. What is less clear is whether this physiological stress is present in dendrites and if Lewy bodies, the major neuropathological lesion found in PD brains, exacerbate it. To pursue these questions, mesencephalic dopaminergic neurons derived from C57BL/6 transgenic mice were studied in primary cultures, allowing for visualization of soma and dendrites simultaneously. Many of the key features of in vivo adult dopaminergic neurons were recapitulated in vitro. Activity-dependent calcium entry through L-type channels increased mitochondrial oxidant stress in dendrites. This stress progressively increased with distance from the soma. Examination of SNc dopaminergic neurons ex vivo in brain slices verified this pattern. Moreover, the formation of intracellular α-synuclein Lewy-body-like aggregates increased mitochondrial oxidant stress in perinuclear and dendritic compartments. This stress appeared to be extramitochondrial in origin, because scavengers of cytosolic reactive oxygen species or inhibition of NADPH oxidase attenuated it. These results show that physiological and proteostatic stress can be additive in the soma and dendrites of vulnerable dopaminergic neurons, providing new insight into the factors underlying PD pathogenesis.",
"title": ""
}
] |
scidocsrr
|
fe654cb752b04fc399c6607f448f1551
|
Do They All Look the Same? Deciphering Chinese, Japanese and Koreans by Fine-Grained Deep Learning
|
[
{
"docid": "48f784f6fe073c55efbc990b2a2257c6",
"text": "Faces convey a wealth of social signals, including race, expression, identity, age and gender, all of which have attracted increasing attention from multi-disciplinary research, such as psychology, neuroscience, computer science, to name a few. Gleaned from recent advances in computer vision, computer graphics, and machine learning, computational intelligence based racial face analysis has been particularly popular due to its significant potential and broader impacts in extensive real-world applications, such as security and defense, surveillance, human computer interface (HCI), biometric-based identification, among others. These studies raise an important question: How implicit, non-declarative racial category can be conceptually modeled and quantitatively inferred from the face? Nevertheless, race classification is challenging due to its ambiguity and complexity depending on context and criteria. To address this challenge, recently, significant efforts have been reported toward race detection and categorization in the community. This survey provides a comprehensive and critical review of the state-of-the-art advances in face-race perception, principles, algorithms, and applications. We first discuss race perception problem formulation and motivation, while highlighting the conceptual potentials of racial face processing. Next, taxonomy of feature representational models, algorithms, performance and racial databases are presented with systematic discussions within the unified learning scenario. Finally, in order to stimulate future research in this field, we also highlight the major opportunities and challenges, as well as potentially important cross-cutting themes and research directions for the issue of learning race from face.",
"title": ""
},
{
"docid": "225204d66c371372debb3bb2a37c795b",
"text": "We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.",
"title": ""
}
] |
[
{
"docid": "d422afa99137d5e09bd47edeb770e872",
"text": "OBJECTIVE\nFood Insecurity (FI) occurs in 21% of families with children and adolescents in the United States, but the potential developmental and behavioral implications of this prevalent social determinant of health have not been comprehensively elucidated. This systematic review aims to examine the association between FI and childhood developmental and behavioral outcomes in western industrialized countries.\n\n\nMETHOD\nThis review provides a critical summary of 23 peer reviewed articles from developed countries on the associations between FI and adverse childhood developmental behavioral outcomes including early cognitive development, academic performance, inattention, externalizing behaviors, and depression in 4 groups-infants and toddlers, preschoolers, school age, and adolescents. Various approaches to measuring food insecurity are delineated. Potential confounding and mediating variables of this association are compared across studies. Alternate explanatory mechanisms of observed effects and need for further research are discussed.\n\n\nRESULTS\nThis review demonstrates that household FI, even at marginal levels, is associated with children's behavioral, academic, and emotional problems from infancy to adolescence across western industrialized countries - even after controlling for confounders.\n\n\nCONCLUSIONS\nWhile the American Academy of Pediatrics already recommends routine screening for food insecurity during health maintenance visits, the evidence summarized here should encourage developmental behavioral health providers to screen for food insecurity in their practices and intervene when possible. Conversely, children whose families are identified as food insecure in primary care settings warrant enhanced developmental behavioral assessment and possible intervention.",
"title": ""
},
{
"docid": "e53c7f8890d3bf49272e08d4446703a4",
"text": "In orthogonal frequency-division multiplexing (OFDM) systems, it is generally assumed that the channel response is static in an OFDM symbol period. However, the assumption does not hold in high-mobility environments. As a result, intercarrier interference (ICI) is induced, and system performance is degraded. A simple remedy for this problem is the application of the zero-forcing (ZF) equalizer. Unfortunately, the direct ZF method requires the inversion of an N times N ICI matrix, where N is the number of subcarriers. When N is large, the computational complexity can become prohibitively high. In this paper, we first propose a low-complexity ZF method to solve the problem in single-input-single-output (SISO) OFDM systems. The main idea is to explore the special structure inherent in the ICI matrix and apply Newton's iteration for matrix inversion. With our formulation, fast Fourier transforms (FFTs) can be used in the iterative process, reducing the complexity from O (N3) to O (N log2 N). Another feature of the proposed algorithm is that it can converge very fast, typically in one or two iterations. We also analyze the convergence behavior of the proposed method and derive the theoretical output signal-to-interference-plus-noise ratio (SINR). For a multiple-input-multiple-output (MIMO) OFDM system, the complexity of the ZF method becomes more intractable. We then extend the method proposed for SISO-OFDM systems to MIMO-OFDM systems. It can be shown that the computational complexity can be reduced even more significantly. Simulations show that the proposed methods perform almost as well as the direct ZF method, while the required computational complexity is reduced dramatically.",
"title": ""
},
{
"docid": "e787357f66066c09cf3a8920edef1244",
"text": "The authors argue that a new six-dimensional framework for personality structure--the HEXACO model--constitutes a viable alternative to the well-known Big Five or five-factor model. The new model is consistent with the cross-culturally replicated finding of a common six-dimensional structure containing the factors Honesty-Humility (H), Emotionality (E), eExtraversion (X), Agreeableness (A), Conscientiousness (C), and Openness to Experience (O). Also, the HEXACO model predicts several personality phenomena that are not explained within the B5/FFM, including the relations of personality factors with theoretical biologists' constructs of reciprocal and kin altruism and the patterns of sex differences in personality traits. In addition, the HEXACO model accommodates several personality variables that are poorly assimilated within the B5/FFM.",
"title": ""
},
{
"docid": "52f20c62f13274d473de5aa179ccf37b",
"text": "The number of Internet auction shoppers is rapidly growing. However, online auction customers may suffer from auction fraud, sometimes without even noticing it. In-auction fraud differs from preand post-auction fraud in that it happens in the bidding period of an active auction. Since the in-auction fraud strategies are subtle and complex, it makes the fraudulent behavior more difficult to discover. Researchers from disciplines such as computer science and economics have proposed a number of methods to deal with in-auction fraud. In this paper, we summarize commonly seen indicators of in-auction fraud, provide a review of significant contributions in the literature of Internet in-auction fraud, and identify future challenging research tasks.",
"title": ""
},
{
"docid": "ce29ddfd7b3d3a28ddcecb7a5bb3ac8e",
"text": "Steganography consist of concealing secret information in a cover object to be sent over a public communication channel. It allows two parties to share hidden information in a way that no intruder can detect the presence of hidden information. This paper presents a novel steganography approach based on pixel location matching of the same cover image. Here the information is not directly embedded within the cover image but a sequence of 4 bits of secret data is compared to the 4 most significant bits (4MSB) of the cover image pixels. The locations of the matching pixels are taken to substitute the 2 least significant bits (2LSB) of the cover image pixels. Since the data are not directly hidden in cover image, the proposed approach is more secure and difficult to break. Intruders cannot intercept it by using common LSB techniques.",
"title": ""
},
{
"docid": "f6ae71fee81a8560f37cb0dccfd1e3cd",
"text": "Linguistic research to date has determined many of the principles that govern the structure of the spatial schemas represented by closed-class forms across the world’s languages. contributing to this cumulative understanding have, for example, been Gruber 1965, Fillmore 1968, Leech 1969, Clark 1973, Bennett 1975, Herskovits 1982, Jackendoff 1983, Zubin and Svorou 1984, as well as myself, Talmy 1983, 2000a, 2000b). It is now feasible to integrate these principles and to determine the comprehensive system they belong to for spatial structuring in spoken language. The finding here is that this system has three main parts: the componential, the compositional, and the augmentive.",
"title": ""
},
{
"docid": "123760f70d7f609dfe3cf3158a5cc23f",
"text": "We investigate national dialect identification, the task of classifying English documents according to their country of origin. We use corpora of known national origin as a proxy for national dialect. In order to identify general (as opposed to corpus-specific) characteristics of national dialects of English, we make use of a variety of corpora of different sources, with inter-corpus variation in length, topic and register. The central intuition is that features that are predictive of national origin across different data sources are features that characterize a national dialect. We examine a number of classification approaches motivated by different areas of research, and evaluate the performance of each method across 3 national dialects: Australian, British, and Canadian English. Our results demonstrate that there are lexical and syntactic characteristics of each national dialect that are consistent across data sources.",
"title": ""
},
{
"docid": "7835bb8463eff6a7fbeec256068e1f09",
"text": "Efforts to incorporate intelligence into the user interface have been underway for decades, but the commercial impact of this work has not lived up to early expectations, and is not immediately apparent. This situation appears to be changing. However, so far the most interesting intelligent user interfaces (IUIS) have tended to use minimal or simplistic AI. In this panel we consider whether more or less AI is the key to the development of compelling IUIS. The panelists will present examples of compelling IUIS that use a selection of AI techniques, mostly simple, but some complex. Each panelist will then comment on the merits of different kinds and quantities of AI in the development of pragmatic interface technology.",
"title": ""
},
{
"docid": "fd2b1d2a4d44f0535ceb6602869ffe1c",
"text": "A conventional FCM algorithm does not fully utilize the spatial information in the image. In this paper, we present a fuzzy c-means (FCM) algorithm that incorporates spatial information into the membership function for clustering. The spatial function is the summation of the membership function in the neighborhood of each pixel under consideration. The advantages of the new method are the following: (1) it yields regions more homogeneous than those of other methods, (2) it reduces the spurious blobs, (3) it removes noisy spots, and (4) it is less sensitive to noise than other techniques. This technique is a powerful method for noisy image segmentation and works for both single and multiple-feature data with spatial information.",
"title": ""
},
{
"docid": "90401f0e283bea2daed999de00dcacc5",
"text": "Steganography is a branch of information security which deals with transmission of message without being detected. Message, to be send, is embedded in a cover file. Different types of digital can be used as cover object, we used (.WAV) audio as our cover file in the research work. The objective of steganography is to shield the fact that the message exists in the transmission medium. Many algorithms have so far derived for this purpose can be categorized in terms of their embedding technique, time and space complexity. LSB is the acronym of „Least Significant Bit‟, is one of the algorithm that is considered as the easiest in way of hiding information in a digital media, also it has good efficiency. It perform its task by embedding secret message in the least significant bits of each data sample of audio file. Ease of cracking this algorithm makes it more prone to visual and statistical attacks. Keeping this in mind few improvisation are being done on LSB algorithm that reduces the ease of cracking message. Modified version of LSB algorithm which we call as „MODIFIED LSB ALGORITHM‟ uses the pseudo-random number generator to spread the secret message over the cover in a random manner. This algorithm will be more immune to statistical attacks without affecting its efficiency significantly.",
"title": ""
},
{
"docid": "fd455e27b023d849c59526655c5060da",
"text": "Face Detection is an important step in any face recognition systems, for the purpose of localizing and extracting face region from the rest of the images. There are many techniques, which have been proposed from simple edge detection techniques to advance techniques such as utilizing pattern recognition approaches. This paper evaluates two methods of face detection, her features and Local Binary Pattern features based on detection hit rate and detection speed. The algorithms were tested on Microsoft Visual C++ 2010 Express with OpenCV library. The experimental results show that Local Binary Pattern features are most efficient and reliable for the implementation of a real-time face detection system.",
"title": ""
},
{
"docid": "4ea07335d42a859768565c8d88cd5280",
"text": "This paper brings together research from two different fields – user modelling and web ontologies – in attempt to demonstrate how recent semantic trends in web development can be combined with the modern technologies of user modelling. Over the last several years, a number of user-adaptive systems have been exploiting ontologies for the purposes of semantics representation, automatic knowledge acquisition, domain and user model visualisation and creation of interoperable and reusable architectural solutions. Before discussing these projects, we first overview the underlying user modelling and ontological technologies. As an example of the project employing ontology-based user modelling, we present an experiment design for translation of overlay student models for relative domains by means of ontology mapping.",
"title": ""
},
{
"docid": "141cab8897e01abef28bf2c2a78874e1",
"text": "Botnet is a network of compromised computers controlled by the attacker(s) from remote locations via Command and Control (C&C) channels. The botnets are one of the largest global threats to the Internet-based commercial and social world. The decentralized Peer-to-Peer (P2P) botnets have appeared in the recent past and are growing at a faster pace. These P2P botnets are continuously evolving from diverse C&C protocols using hybrid structures and are turning to be more complicated and stealthy. In this paper, we present a comprehensive survey of the evolution, functionalities, modelling and the development life cycle of P2P botnets. Further, we investigate the various P2P botnet detection approaches. Finally, discuss the key research challenges useful for the research initiatives. This paper is useful in understanding the P2P botnets and gives an insight into the usefulness and limitations of the various P2P botnet detection techniques proposed by the researchers. The study will enable the researchers toward proposing the more useful detection techniques.",
"title": ""
},
{
"docid": "df679dcd213842a786c1ad9587c66f77",
"text": "The statistics of professional sports, including players and teams, provide numerous opportunities for research. Cricket is one of the most popular team sports, with billions of fans all over the world. In this thesis, we address two problems related to the One Day International (ODI) format of the game. First, we propose a novel method to predict the winner of ODI cricket matches using a team-composition based approach at the start of the match. Second, we present a method to quantitatively assess the performances of individual players in a match of ODI cricket which incorporates the game situations under which the players performed. The player performances are further used to predict the player of the match award. Players are the fundamental unit of a team. Players of one team work against the players of the opponent team in order to win a match. The strengths and abilities of the players of a team play a key role in deciding the outcome of a match. However, a team changes its composition depending on the match conditions, venue, and opponent team, etc. Therefore, we propose a novel dynamic approach which takes into account the varying strengths of the individual players and reflects the changes in player combinations over time. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual players’ batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. Using the relative strength of one team versus the other, along with two player-independent features, namely, the toss outcome and the venue of the match, we evaluate multiple supervised machine learning algorithms to predict the winner of the match. We show that, for our approach, the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers. Players have multiple roles in a game of cricket, predominantly as batsmen and bowlers. Over the generations, statistics such as batting and bowling averages, and strike and economy rates have been used to judge the performance of individual players. These measures, however, do not take into consideration the context of the game in which a player performed across the course of a match. Further, these types of statistics are incapable of comparing the performance of players across different roles. Therefore, we present an approach to quantitatively assess the performances of individual players in a single match of ODI cricket. We have developed a new measure, called the Work Index, which represents the amount of work that is yet to be done by a team to achieve its target. Our approach incorporates game situations and the team strengths to measure the player contributions. This not only helps us in",
"title": ""
},
{
"docid": "11229bf95164064f954c25681c684a16",
"text": "This article proposes integrating the insights generated by framing, priming, and agenda-setting research through a systematic effort to conceptualize and understand their larger implications for political power and democracy. The organizing concept is bias, that curiously undertheorized staple of public discourse about the media. After showing how agenda setting, framing and priming fit together as tools of power, the article connects them to explicit definitions of news slant and the related but distinct phenomenon of bias. The article suggests improved measures of slant and bias. Properly defined and measured, slant and bias provide insight into how the media influence the distribution of power: who gets what, when, and how. Content analysis should be informed by explicit theory linking patterns of framing in the media text to predictable priming and agenda-setting effects on audiences. When unmoored by such underlying theory, measures and conclusions of media bias are suspect.",
"title": ""
},
{
"docid": "d529d1052fce64ae05fbc64d2b0450ab",
"text": "Today, many industrial companies must face problems raised by maintenance. In particular, the anomaly detection problem is probably one of the most challenging. In this paper we focus on the railway maintenance task and propose to automatically detect anomalies in order to predict in advance potential failures. We first address the problem of characterizing normal behavior. In order to extract interesting patterns, we have developed a method to take into account the contextual criteria associated to railway data (itinerary, weather conditions, etc.). We then measure the compliance of new data, according to extracted knowledge, and provide information about the seriousness and the exact localization of a detected anomaly. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f4df305ad32ebdd1006eefdec6ee7ca3",
"text": "In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1-3]. Two motor pathways control facial movement [4-7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8-11]. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.",
"title": ""
},
{
"docid": "d5330d3045a27f2c59ef01903b87a54e",
"text": "Industrial Control and SCADA (Supervisory Control and Data Acquisition) networks control critical infrastructure such as power plants, nuclear facilities, and water supply systems. These systems are increasingly the target of cyber attacks by threat actors of different kinds, with successful attacks having the potential to cause damage, cost and injury/loss of life. As a result, there is a strong need for enhanced tools to detect cyber threats in SCADA networks. This paper makes a number of contributions to advance research in this area. First, we study the level of support for SCADA protocols in well-known open source intrusion detection systems (IDS). Second, we select a specific IDS, Suricata, and enhance it to include support for detecting threats against SCADA systems running the EtherNet/IP (ENIP) industrial control protocol. Finally, we conduct a traffic-based study to evaluate the performance of the new ENIP module in Suricata - analyzing its performance in low performance hardware systems.",
"title": ""
},
{
"docid": "86820c43e63066930120fa5725b5b56d",
"text": "We introduce Wiktionary as an emerging lexical semantic resource that can be used as a substitute for expert-made resources in AI applications. We evaluate Wiktionary on the pervasive task of computing semantic relatedness for English and German by means of correlation with human rankings and solving word choice problems. For the first time, we apply a concept vector based measure to a set of different concept representations like Wiktionary pseudo glosses, the first paragraph of Wikipedia articles, English WordNet glosses, and GermaNet pseudo glosses. We show that: (i) Wiktionary is the best lexical semantic resource in the ranking task and performs comparably to other resources in the word choice task, and (ii) the concept vector based approach yields the best results on all datasets in both evaluations.",
"title": ""
}
] |
scidocsrr
|
6b9ce507f12ba3036f9c580491e845e3
|
TLTD: A Testing Framework for Learning-Based IoT Traffic Detection Systems
|
[
{
"docid": "67e85e8b59ec7dc8b0019afa8270e861",
"text": "Machine learning’s ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defenses.",
"title": ""
},
{
"docid": "17611b0521b69ad2b22eeadc10d6d793",
"text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"title": ""
},
{
"docid": "580d83a0e627daedb45fe55e3f9b6883",
"text": "With near exponential growth predicted in the number of Internet of Things (IoT) based devices within networked systems there is need of a means of providing their flexible and secure integration. Software Defined Networking (SDN) is a concept that allows for the centralised control and configuration of network devices, and also provides opportunities for the dynamic control of network traffic. This paper proposes the use of an SDN gateway as a distributed means of monitoring the traffic originating from and directed to IoT based devices. This gateway can then both detect anomalous behaviour and perform an appropriate response (blocking, forwarding, or applying Quality of Service). Initial results demonstrate that, while the addition of the attack detection functionality has an impact on the number of flow installations possible per second, it can successfully detect and block TCP and ICMP flood based attacks.",
"title": ""
},
{
"docid": "11a69c06f21e505b3e05384536108325",
"text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
"title": ""
}
] |
[
{
"docid": "f400ca4fe8fc5c684edf1ae60e026632",
"text": "Driverless vehicles will be common on the road in a short time. They will have many impacts on the global transport market trends. One of the remarkable driverless vehicles impacts will be the laying aside of rail systems, because of several reasons, that is to say traffic congestions will be no more a justification for rail, rail will not be the best answer for disableds, air pollution of cars are more or less equal to air pollution of trains and the last but not least reason is that driverless cars are safer than trains.",
"title": ""
},
{
"docid": "6171a708ea6470b837439ad23af90dff",
"text": "Cardiovascular diseases represent a worldwide relevant socioeconomical problem. Cardiovascular disease prevention relies also on lifestyle changes, including dietary habits. The cardioprotective effects of several foods and dietary supplements in both animal models and in humans have been explored. It was found that beneficial effects are mainly dependent on antioxidant and anti-inflammatory properties, also involving modulation of mitochondrial function. Resveratrol is one of the most studied phytochemical compounds and it is provided with several benefits in cardiovascular diseases as well as in other pathological conditions (such as cancer). Other relevant compounds are Brassica oleracea, curcumin, and berberine, and they all exert beneficial effects in several diseases. In the attempt to provide a comprehensive reference tool for both researchers and clinicians, we summarized in the present paper the existing literature on both preclinical and clinical cardioprotective effects of each mentioned phytochemical. We structured the discussion of each compound by analyzing, first, its cellular molecular targets of action, subsequently focusing on results from applications in both ex vivo and in vivo models, finally discussing the relevance of the compound in the context of human diseases.",
"title": ""
},
{
"docid": "94316059aba51baedd5662e7246e23c1",
"text": "The increased need of content based image retrieval technique can be found in a number of different domains such as Data Mining, Education, Medical Imaging, Crime Prevention, Weather forecasting, Remote Sensing and Management of Earth Resources. This paper presents the content based image retrieval, using features like texture and color, called WBCHIR (Wavelet Based Color Histogram Image Retrieval).The texture and color features are extracted through wavelet transformation and color histogram and the combination of these features is robust to scaling and translation of objects in an image. The proposed system has demonstrated a promising and faster retrieval method on a WANG image database containing 1000 general-purpose color images. The performance has been evaluated by comparing with the existing systems in the literature.",
"title": ""
},
{
"docid": "6560a704d5f8022193b60dd3ad213d5a",
"text": "Despite web access on mobile devices becoming commonplace, users continue to experience poor web performance on these devices. Traditional approaches for improving web performance (e.g., compression, SPDY, faster browsers) face an uphill battle due to the fundamentally conflicting trends in user expectations of lower load times and richer web content. Embracing the reality that page load times will continue to be higher than user tolerance limits for the foreseeable future, we ask: How can we deliver the best possible user experience? To this end, we present KLOTSKI, a system that prioritizes the content most relevant to a user’s preferences. In designing KLOTSKI, we address several challenges in: (1) accounting for inter-resource dependencies on a page; (2) enabling fast selection and load time estimation for the subset of resources to be prioritized; and (3) developing a practical implementation that requires no changes to websites. Across a range of user preference criteria, KLOTSKI can significantly improve the user experience relative to native websites.",
"title": ""
},
{
"docid": "7e40c7145f4613f12e7fc13646f3927c",
"text": "One strategy for intelligent agents in order to reach their goals is to plan their actions in advance. This can be done by simulating how the agent’s actions affect the environment and how it evolves independently of the agent. For this simulation, a model of the environment is needed. However, the creation of this model might be labor-intensive and it might be computational complex to evaluate during simulation. That is why, we suggest to equip an intelligent agent with a learned intuition about the dynamics of its environment by utilizing the concept of intuitive physics. To demonstrate our approach, we used an agent that can freely move in a two dimensional floor plan. It has to collect moving targets while avoiding the collision with static and dynamic obstacles. In order to do so, the agent plans its actions up to a defined planning horizon. The performance of our agent, which intuitively estimates the dynamics of its surrounding objects based on artificial neural networks, is compared to an agent which has a physically exact model of the world and one that acts randomly. The evaluation shows comparatively good results for the intuition based agent considering it uses only a quarter of the computation time in comparison to the agent with a physically exact model.",
"title": ""
},
{
"docid": "091d9afe87fa944548b9f11386112d6e",
"text": "In a cognitive radio network, the secondary users are allowed to utilize the frequency bands of primary users when these bands are not currently being used. To support this spectrum reuse functionality, the secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance in cognitive radio networks. There are two parameters associated with spectrum sensing: probability of detection and probability of false alarm. The higher the probability of detection, the better the primary users are protected. However, from the secondary users' perspective, the lower the probability of false alarm, the more chances the channel can be reused when it is available, thus the higher the achievable throughput for the secondary network. In this paper, we study the problem of designing the sensing duration to maximize the achievable throughput for the secondary network under the constraint that the primary users are sufficiently protected. We formulate the sensing-throughput tradeoff problem mathematically, and use energy detection sensing scheme to prove that the formulated problem indeed has one optimal sensing time which yields the highest throughput for the secondary network. Cooperative sensing using multiple mini-slots or multiple secondary users are also studied using the methodology proposed in this paper. Computer simulations have shown that for a 6 MHz channel, when the frame duration is 100 ms, and the signal-to-noise ratio of primary user at the secondary receiver is -20 dB, the optimal sensing time achieving the highest throughput while maintaining 90% detection probability is 14.2 ms. This optimal sensing time decreases when distributed spectrum sensing is applied.",
"title": ""
},
{
"docid": "f58a1a0d8cc0e2c826c911be4451e0df",
"text": "From an accessibility perspective, voice-controlled, home-based intelligent personal assistants (IPAs) have the potential to greatly expand speech interaction beyond dictation and screen reader output. To examine the accessibility of off-the-shelf IPAs (e.g., Amazon Echo) and to understand how users with disabilities are making use of these devices, we conducted two exploratory studies. The first, broader study is a content analysis of 346 Amazon Echo reviews that include users with disabilities, while the second study more specifically focuses on users with visual impairments, through interviews with 16 current users of home-based IPAs. Findings show that, although some accessibility challenges exist, users with a range of disabilities are using the Amazon Echo, including for unexpected cases such as speech therapy and support for caregivers. Richer voice-based applications and solutions to support discoverability would be particularly useful to users with visual impairments. These findings should inform future work on accessible voice-based IPAs.",
"title": ""
},
{
"docid": "374674cc8a087d31ee2c801f7e49aa8d",
"text": "Two biological control agents, Bacillus subtilis AP-01 (Larminar(™)) and Trichoderma harzianum AP-001 (Trisan(™)) alone or/in combination were investigated in controlling three tobacco diseases, including bacterial wilt (Ralstonia solanacearum), damping-off (Pythium aphanidermatum), and frogeye leaf spot (Cercospora nicotiana). Tests were performed in greenhouse by soil sterilization prior to inoculation of the pathogens. Bacterial-wilt and damping off pathogens were drenched first and followed with the biological control agents and for comparison purposes, two chemical fungicides. But for frogeye leaf spot, which is an airborne fungus, a spraying procedure for every treatment including a chemical fungicide was applied instead of drenching. Results showed that neither B. subtilis AP-01 nor T harzianum AP-001 alone could control the bacterial wilt, but when combined, their controlling capabilities were as effective as a chemical treatment. These results were also similar for damping-off disease when used in combination. In addition, the combined B. subtilis AP-01 and T. harzianum AP-001 resulted in a good frogeye leaf spot control, which was not significantly different from the chemical treatment.",
"title": ""
},
{
"docid": "32b860121b49bd3a61673b3745b7b1fd",
"text": "Online reviews are a growing market, but it is struggling with fake reviews. They undermine both the value of reviews to the user, and their trust in the review sites. However, fake positive reviews can boost a business, and so a small industry producing fake reviews has developed. The two sides are facing an arms race that involves more and more natural language processing (NLP). So far, NLP has been used mostly for detection, and works well on human-generated reviews. But what happens if NLP techniques are used to generate fake reviews as well? We investigate the question in an adversarial setup, by assessing the detectability of different fake-review generation strategies. We use generative models to produce reviews based on meta-information, and evaluate their effectiveness against deceptiondetection models and human judges. We find that meta-information helps detection, but that NLP-generated reviews conditioned on such information are also much harder to detect than conventional ones.",
"title": ""
},
{
"docid": "f70ce9d95ac15fc0800b8e6ac60247cb",
"text": "Many systems for the parallel processing of big data are available today. Yet, few users can tell by intuition which system, or combination of systems, is \"best\" for a given workflow. Porting workflows between systems is tedious. Hence, users become \"locked in\", despite faster or more efficient systems being available. This is a direct consequence of the tight coupling between user-facing front-ends that express workflows (e.g., Hive, SparkSQL, Lindi, GraphLINQ) and the back-end execution engines that run them (e.g., MapReduce, Spark, PowerGraph, Naiad).\n We argue that the ways that workflows are defined should be decoupled from the manner in which they are executed. To explore this idea, we have built Musketeer, a workflow manager which can dynamically map front-end workflow descriptions to a broad range of back-end execution engines.\n Our prototype maps workflows expressed in four high-level query languages to seven different popular data processing systems. Musketeer speeds up realistic workflows by up to 9x by targeting different execution engines, without requiring any manual effort. Its automatically generated back-end code comes within 5%--30% of the performance of hand-optimized implementations.",
"title": ""
},
{
"docid": "11a2882124e64bd6b2def197d9dc811a",
"text": "1 Abstract— Clustering is the most acceptable technique to analyze the raw data. Clustering can help detect intrusions when our training data is unlabeled, as well as for detecting new and unknown types of intrusions. In this paper we are trying to analyze the NSL-KDD dataset using Simple K-Means clustering algorithm. We tried to cluster the dataset into normal and four of the major attack categories i.e. DoS, Probe, R2L, U2R. Experiments are performed in WEKA environment. Results are verified and validated using test dataset. Our main objective is to provide the complete analysis of NSL-KDD intrusion detection dataset.",
"title": ""
},
{
"docid": "b44ebb850ce2349dddc35bbf9a01fb8a",
"text": "Automatically assessing emotional valence in human speech has historically been a difficult task for machine learning algorithms. The subtle changes in the voice of the speaker that are indicative of positive or negative emotional states are often “overshadowed” by voice characteristics relating to emotional intensity or emotional activation. In this work we explore a representation learning approach that automatically derives discriminative representations of emotional speech. In particular, we investigate two machine learning strategies to improve classifier performance: (1) utilization of unlabeled data using a deep convolutional generative adversarial network (DCGAN), and (2) multitask learning. Within our extensive experiments we leverage a multitask annotated emotional corpus as well as a large unlabeled meeting corpus (around 100 hours). Our speaker-independent classification experiments show that in particular the use of unlabeled data in our investigations improves performance of the classifiers and both fully supervised baseline approaches are outperformed considerably. We improve the classification of emotional valence on a discrete 5-point scale to 43.88% and on a 3-point scale to 49.80%, which is competitive to state-of-the-art performance.",
"title": ""
},
{
"docid": "ccaba0b30fc1a0c7d55d00003b07725a",
"text": "We collect a corpus of 1554 online news articles from 23 RSS feeds and analyze it in terms of controversy and sentiment. We use several existing sentiment lexicons and lists of controversial terms to perform a number of statistical analyses that explore how sentiment and controversy are related. We conclude that the negative sentiment and controversy are not necessarily positively correlated as has been claimed in the past. In addition, we apply an information theoretic approach and suggest that entropy might be a good predictor of controversy.",
"title": ""
},
{
"docid": "6a2e5831f2a2e1625be2bfb7941b9d1b",
"text": "Benefited from cloud storage services, users can save their cost of buying expensive storage and application servers, as well as deploying and maintaining applications. Meanwhile they lost the physical control of their data. So effective methods are needed to verify the correctness of the data stored at cloud servers, which are the research issues the Provable Data Possession (PDP) faced. The most important features in PDP are: 1) supporting for public, unlimited numbers of times of verification; 2) supporting for dynamic data update; 3) efficiency of storage space and computing. In mobile cloud computing, mobile end-users also need the PDP service. However, the computing workloads and storage burden of client in existing PDP schemes are too heavy to be directly used by the resource-constrained mobile devices. To solve this problem, with the integration of the trusted computing technology, this paper proposes a novel public PDP scheme, in which the trusted third-party agent (TPA) takes over most of the calculations from the mobile end-users. By using bilinear signature and Merkle hash tree (MHT), the scheme aggregates the verification tokens of the data file into one small signature to reduce communication and storage burden. MHT is also helpful to support dynamic data update. In our framework, the mobile terminal devices only need to generate some secret keys and random numbers with the help of trusted platform model (TPM) chips, and the needed computing workload and storage space is fit for mobile devices. Our scheme realizes provable secure storage service for resource-constrained mobile devices in mobile cloud computing.",
"title": ""
},
{
"docid": "53e668839e9d7e065dc7864830623790",
"text": "Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of the specification of prior knowledge are discussed. To illustrate Bayesian methods explained in this study, in a second example a series of studies that examine the theoretical framework of dynamic interactionism are considered. In the Discussion the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report on Bayesian statistics are provided.",
"title": ""
},
{
"docid": "9381ba0001262dd29d7ca74a98a56fc7",
"text": "Despite several advances in information retrieval systems and user interfaces, the specification of queries over text-based document collections remains a challenging problem. Query specification with keywords is a popular solution. However, given the widespread adoption of gesture-driven interfaces such as multitouch technologies in smartphones and tablets, the lack of a physical keyboard makes query specification with keywords inconvenient. We present BinGO, a novel gestural approach to querying text databases that allows users to refine their queries using a swipe gesture to either \"like\" or \"dislike\" candidate documents as well as express the reasons they like or dislike a document by swiping through automatically generated \"reason bins\". Such reasons refine a user's query with additional keywords. We present an online and efficient bin generation algorithm that presents reason bins at gesture articulation. We motivate and describe BinGo's unique interface design choices. Based on our analysis and user studies, we demonstrate that query specification by swiping through reason bins is easy and expressive.",
"title": ""
},
{
"docid": "8d4c66f9e12c1225df1e79628d666702",
"text": "Recently, wavelet transforms have gained very high attention in many fields and applications such as physics, engineering, signal processing, applied mathematics and statistics. In this paper, we present the advantage of wavelet transforms in forecasting financial time series data. Amman stock market (Jordan) was selected as a tool to show the ability of wavelet transform in forecasting financial time series, experimentally. This article suggests a novel technique for forecasting the financial time series data, based on Wavelet transforms and ARIMA model. Daily return data from 1993 until 2009 is used for this study. 316 S. Al Wadi et al",
"title": ""
},
{
"docid": "1977e7813b15ffb3a4238f3ed40f0e1f",
"text": "Despite the existence of standard protocol, many stabilization centers (SCs) continue to experience high mortality of children receiving treatment for severe acute malnutrition. Assessing treatment outcomes and identifying predictors may help to overcome this problem. Therefore, a 30-month retrospective cohort study was conducted among 545 randomly selected medical records of children <5 years of age admitted to SCs in Gedeo Zone. Data was entered by Epi Info version 7 and analyzed by STATA version 11. Cox proportional hazards model was built by forward stepwise procedure and compared by the likelihood ratio test and Harrell's concordance, and fitness was checked by Cox-Snell residual plot. During follow-up, 51 (9.3%) children had died, and 414 (76%) and 26 (4.8%) children had recovered and defaulted (missed follow-up for 2 consecutive days), respectively. The survival rates at the end of the first, second and third weeks were 95.3%, 90% and 85%, respectively, and the overall mean survival time was 79.6 days. Age <24 months (adjusted hazard ratio [AHR] =2.841, 95% confidence interval [CI] =1.101-7.329), altered pulse rate (AHR =3.926, 95% CI =1.579-9.763), altered temperature (AHR =7.173, 95% CI =3.05-16.867), shock (AHR =3.805, 95% CI =1.829-7.919), anemia (AHR =2.618, 95% CI =1.148-5.97), nasogastric tube feeding (AHR =3.181, 95% CI =1.18-8.575), hypoglycemia (AHR =2.74, 95% CI =1.279-5.87) and treatment at hospital stabilization center (AHR =4.772, 95% CI =1.638-13.9) were independent predictors of mortality. The treatment outcomes and incidence of death were in the acceptable ranges of national and international standards. Intervention to further reduce deaths has to focus on young children with comorbidities and altered general conditions.",
"title": ""
},
{
"docid": "1839d9e6ef4bad29381105f0a604b731",
"text": "Our focus is on the effects that dated ideas about the nature of science (NOS) have on curriculum, instruction and assessments. First we examine historical developments in teaching about NOS, beginning with the seminal ideas of James Conant. Next we provide an overview of recent developments in philosophy and cognitive sciences that have shifted NOS characterizations away from general heuristic principles toward cognitive and social elements. Next, we analyze two alternative views regarding ‘explicitly teaching’ NOS in pre-college programs. Version 1 is grounded in teachers presenting ‘Consensus-based Heuristic Principles’ in science lessons and activities. Version 2 is grounded in learners experience of ‘Building and Refining Model-Based Scientific Practices’ in critique and communication enactments that occur in longer immersion units and learning progressions. We argue that Version 2 is to be preferred over Version 1 because it develops the critical epistemic cognitive and social practices that scientists and science learners use when (1) developing and evaluating scientific evidence, explanations and knowledge and (2) critiquing and communicating scientific ideas and information; thereby promoting science literacy. 1 NOS and Science Education When and how did knowledge about science, as opposed to scientific content knowledge, become a targeted outcome of science education? From a US perspective, the decades of interest are the 1940s and 1950s when two major post-war developments in science education policy initiatives occurred. The first, in post secondary education, was the GI Bill An earlier version of this paper was presented as a plenary session by the first author at the ‘How Science Works—And How to Teach It’ workshop, Aarhus University, 23–25 June, 2011, Denmark. R. A. Duschl (&) The Pennsylvania State University, University Park, PA, USA e-mail: [email protected] R. Grandy Rice University, Houston, TX, USA 123 Sci & Educ DOI 10.1007/s11191-012-9539-4",
"title": ""
},
{
"docid": "e289f0f11ee99c57ede48988cc2dbd5c",
"text": "Generative Adversarial Networks (GANs) are becoming popular choices for unsupervised learning. At the same time there is a concerted effort in the machine learning community to expand the range of tasks in which learning can be applied as well as to utilize methods from other disciplines to accelerate learning. With this in mind, in the current work we suggest ways to enforce given constraints in the output of a GAN both for interpolation and extrapolation. The two cases need to be treated differently. For the case of interpolation, the incorporation of constraints is built into the training of the GAN. The incorporation of the constraints respects the primary gametheoretic setup of a GAN so it can be combined with existing algorithms. However, it can exacerbate the problem of instability during training that is well-known for GANs. We suggest adding small noise to the constraints as a simple remedy that has performed well in our numerical experiments. The case of extrapolation (prediction) is more involved. First, we employ a modified interpolation training process that uses noisy data but does not necessarily enforce the constraints during training. Second, the resulting modified interpolator is used for extrapolation where the constraints are enforced after each step through projection on the space of constraints.",
"title": ""
}
] |
scidocsrr
|
558d627e2c9607358ed49acecb4b7509
|
Real-time Human Pose Estimation from Video with Convolutional Neural Networks
|
[
{
"docid": "7fa9bacbb6b08065ecfe0530f082a391",
"text": "This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation.",
"title": ""
},
{
"docid": "bd6757398f7e612efa66bf60f81d4fa7",
"text": "In this paper we consider the problem of human pose estimation from a single still image. We propose a novel approach where each location in the image votes for the position of each keypoint using a convolutional neural net. The voting scheme allows us to utilize information from the whole image, rather than rely on a sparse set of keypoint locations. Using dense, multi-target votes, not only produces good keypoint predictions, but also enables us to compute image-dependent joint keypoint probabilities by looking at consensus voting. This differs from most previous methods where joint probabilities are learned from relative keypoint locations and are independent of the image. We finally combine the keypoints votes and joint probabilities in order to identify the optimal pose configuration. We show our competitive performance on the MPII Human Pose and Leeds Sports Pose datasets.",
"title": ""
}
] |
[
{
"docid": "da3ba9c7e5000b5e957c961382da8409",
"text": "This paper presents a design, fabrication and characterization of a low-cost capacitive tilt sensor. The proposed sensor consists of a three-electrode capacitor, which contains two-phase of the air and liquid as the dielectric media. The three electrodes hold a plastic tube and the tube is positioned on a printed circuit board (PCB) which consists of a 127 kHz sine wave generator, a pre-amplifier, a rectifier and a low pass filter. The proposed sensor structure can measure tilt angles in the rage of 0° to 75°, where the linear relationship between the angle to be measured and the output signal was observed in the range of 0° to 50°. The sensitivity and resolution of the sensor are measured to be 40mV/degree and 0.5 degree, respectively.",
"title": ""
},
{
"docid": "5bd93d7d993df9d2a8566721aa84a7ed",
"text": "The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. This work collects and presents hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE as well as small secret variants. Since for several methods of solving LWE we require a lattice reduction step, we also review lattice reduction algorithms and use a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.",
"title": ""
},
{
"docid": "3874d10936841f59647d73f750537d96",
"text": "The number of studies comparing nutritional quality of restrictive diets is limited. Data on vegan subjects are especially lacking. It was the aim of the present study to compare the quality and the contributing components of vegan, vegetarian, semi-vegetarian, pesco-vegetarian and omnivorous diets. Dietary intake was estimated using a cross-sectional online survey with a 52-items food frequency questionnaire (FFQ). Healthy Eating Index 2010 (HEI-2010) and the Mediterranean Diet Score (MDS) were calculated as indicators for diet quality. After analysis of the diet questionnaire and the FFQ, 1475 participants were classified as vegans (n = 104), vegetarians (n = 573), semi-vegetarians (n = 498), pesco-vegetarians (n = 145), and omnivores (n = 155). The most restricted diet, i.e., the vegan diet, had the lowest total energy intake, better fat intake profile, lowest protein and highest dietary fiber intake in contrast to the omnivorous diet. Calcium intake was lowest for the vegans and below national dietary recommendations. The vegan diet received the highest index values and the omnivorous the lowest for HEI-2010 and MDS. Typical aspects of a vegan diet (high fruit and vegetable intake, low sodium intake, and low intake of saturated fat) contributed substantially to the total score, independent of the indexing system used. The score for the more prudent diets (vegetarians, semi-vegetarians and pesco-vegetarians) differed as a function of the used indexing system but they were mostly better in terms of nutrient quality than the omnivores.",
"title": ""
},
{
"docid": "9b10757ca3ca84784033c20f064078b7",
"text": "Snafu, or Snake Functions, is a modular system to host, execute and manage language-level functions offered as stateless (micro-)services to diverse external triggers. The system interfaces resemble those of commercial FaaS providers but its implementation provides distinct features which make it overall useful to research on FaaS and prototyping of FaaSbased applications. This paper argues about the system motivation in the presence of already existing alternatives, its design and architecture, the open source implementation and collected metrics which characterise the system.",
"title": ""
},
{
"docid": "96fa50abd2a4fcff47af85f07b4e9d5d",
"text": "Complex biological systems and cellular networks may underlie most genotype to phenotype relationships. Here, we review basic concepts in network biology, discussing different types of interactome networks and the insights that can come from analyzing them. We elaborate on why interactome networks are important to consider in biology, how they can be mapped and integrated with each other, what global properties are starting to emerge from interactome network models, and how these properties may relate to human disease.",
"title": ""
},
{
"docid": "25762ced8056219ae74d4dda941959d8",
"text": "Services like chatbots that provide information to customers in real-time are of increasing importance for the online market. Chatbots offer an intuitive interface to answer user requests in an interactive manner. The inquiries are of wide-range and include information about specific goods and services but also financial issues and personal advices. The notable advantages of these programs are the simplicity of use and speed of the search process. In some cases, chatbots have even surpassed classical web, mobile applications, and social networks. Chatbots might have access to huge amount of data or personal information. Therefore, they might be a valuable target for hackers, and known web application vulnerabilities might be a security issue for chatbots as well. In this paper, we discuss the challenges of security testing for chatbots. We provide an overview about an automated testing approach adapted to chatbots, and first experimental results.",
"title": ""
},
{
"docid": "439485763ec50c6a1e843f98950e4b7d",
"text": "Currently the large surplus of glycerol formed as a by-product during the production of biodiesel offered an abundant and low cost feedstock. Researchers showed a surge of interest in using glycerol as renewable feedstock to produce functional chemicals. This Minireview focuses on recent developments in the conversion of glycerol into valueadded products, including citric acid, lactic acid, 1,3-dihydroxyacetone (DHA), 1,3-propanediol (1,3-PD), dichloro-2propanol (DCP), acrolein, hydrogen, and ethanol etc. The versatile new applications of glycerol in the everyday life and chemical industry will improve the economic viability of the biodiesel industry.",
"title": ""
},
{
"docid": "ebe7bca5a5f18152ef1eee6d545489cd",
"text": "This paper proposed a design, implementation & performance of an energy efficient solar tracking system based on closed loop technique. This solar tracking system is autonomous dual axis hybrid type. Solar efficiency depends on PV cell and its tracking system. In our works we concentrate on the panel tracking system. Our tracking system is a sensor based tracking system and it can track the sun continuously. A sensor placed on the surface of the panel and sensor compare light intensity continuously. Then our control unit sends signal to actuator unit to reposition the panel. This system is energy efficient because the actuator unit shut down in cloudy weather and turns on in sunny weather. Powerful microcontroller used to calculate and evaluate the light intensity form sensor unit, then send instruction to actuator unit. For graceful and accurate angular motion we used the servo actuator system. It's no need additional real time clock to track annual motion and daily motion because it's based on active closed loop system. Our implemented design is more efficient and convenient than other tracker",
"title": ""
},
{
"docid": "f69d669235d54858eb318b53cdadcb47",
"text": "We present a complete vision guided robot system for model based 3D pose estimation and picking of singulated 3D objects. Our system employs a novel vision sensor consisting of a video camera surrounded by eight flashes (light emitting diodes). By capturing images under different flashes and observing the shadows, depth edges or silhouettes in the scene are obtained. The silhouettes are segmented into different objects and each silhouette is matched across a database of object silhouettes in different poses to find the coarse 3D pose. The database is pre-computed using a Computer Aided Design (CAD) model of the object. The pose is refined using a fully projective formulation [ACB98] of Lowe’s model based pose estimation algorithm [Low91, Low87]. The estimated pose is transferred to robot coordinate system utilizing the handeye and camera calibration parameters, which allows the robot to pick the object. Our system outperforms conventional systems using 2D sensors with intensity-based features as well as 3D sensors. We handle complex ambient illumination conditions, challenging specular backgrounds, diffuse as well as specular objects, and texture-less objects, on which traditional systems usually fail. Our vision sensor is capable of computing depth edges in real time and is low cost. Our approach is simple and fast for practical implementation. We present real experimental results using our custom designed sensor mounted on a robot arm to demonstrate the effectiveness of our technique. International Journal of Robotics Research This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2009 201 Broadway, Cambridge, Massachusetts 02139",
"title": ""
},
{
"docid": "df70cb4b1d37680cccb7d79bdea5d13b",
"text": "In this paper, we describe a system for automatic construction of user disease progression timelines from their posts in online support groups using minimal supervision. In recent years, several online support groups have been established which has led to a huge increase in the amount of patient-authored text available. Creating systems which can automatically extract important medical events and create disease progression timelines for users from such text can help in patient health monitoring as well as studying links between medical events and users’ participation in support groups. Prior work in this domain has used manually constructed keyword sets to detect medical events. In this work, our aim is to perform medical event detection using minimal supervision in order to develop a more general timeline construction system. Our system achieves an accuracy of 55.17%, which is 92% of the performance achieved by a supervised baseline system.",
"title": ""
},
{
"docid": "8adb07a99940383139f0d4ed32f68f7c",
"text": "The gene ASPM (abnormal spindle-like microcephaly associated) is a specific regulator of brain size, and its evolution in the lineage leading to Homo sapiens was driven by strong positive selection. Here, we show that one genetic variant of ASPM in humans arose merely about 5800 years ago and has since swept to high frequency under strong positive selection. These findings, especially the remarkably young age of the positively selected variant, suggest that the human brain is still undergoing rapid adaptive evolution.",
"title": ""
},
{
"docid": "501d6ec6163bc8b93fd728412a3e97f3",
"text": "This short paper describes our ongoing research on Greenhouse a zero-positive machine learning system for time-series anomaly detection.",
"title": ""
},
{
"docid": "acb569b267eae92a6e33b52725f28833",
"text": "A multi-objective design procedure is applied to the design of a close-coupled inductor for a three-phase interleaved 140kW DC-DC converter. For the multi-objective optimization, a genetic algorithm is used in combination with a detailed physical model of the inductive component. From the solution of the optimization, important conclusions about the advantages and disadvantages of using close-coupled inductors compared to separate inductors can be drawn.",
"title": ""
},
{
"docid": "9eef13dc72daa4ec6cce816c61364d2d",
"text": "Bootstrapping is a crucial operation in Gentry’s breakthrough work on fully homomorphic encryption (FHE), where a homomorphic encryption scheme evaluates its own decryption algorithm. There has been a couple of implementations of bootstrapping, among which HElib arguably marks the state-of-the-art in terms of throughput, ciphertext/message size ratio and support for large plaintext moduli. In this work, we applied a family of “lowest digit removal” polynomials to design an improved homomorphic digit extraction algorithm which is a crucial part in bootstrapping for both FV and BGV schemes. When the secret key has 1-norm h = ||s||1 and the plaintext modulus is t = p, we achieved bootstrapping depth log h + log(logp(ht)) in FV scheme. In case of the BGV scheme, we brought down the depth from log h+ 2 log t to log h + log t. We implemented bootstrapping for FV in the SEAL library. We also introduced another “slim mode”, which restrict the plaintexts to batched vectors in Zpr . The slim mode has similar throughput as the full mode, while each individual run is much faster and uses much smaller memory. For example, bootstrapping takes 6.75 seconds for vectors over GF (127) with 64 slots and 1381 seconds for vectors over GF (257) with 128 slots. We also implemented our improved digit extraction procedure for the BGV scheme in HElib.",
"title": ""
},
{
"docid": "1c63438d58ef3817ce9b637bddc57fc1",
"text": "Object recognition strategies are increasingly based on regional descriptors such as SIFT or HOG at a sparse set of points or on a dense grid of points. Despite their success on databases such as PASCAL and CALTECH, the capability of such a representation in capturing the essential object content of the image is not well-understood: How large is the equivalence class of images sharing the same HOG descriptor? Are all these images from the same object category, and if not, do the non-category images resemble random images which cannot generically arise from imaged scenes? How frequently do images from two categories share the same HOG-based representation? These questions are increasingly more relevant as very large databases such as ImageNet and LabelMe are being developed where the current object recognition strategies show limited success. We examine these questions by introducing the metameric class of moments of HOG which allows for a target image to be morphed into an impostor image sharing the HOG representation of a source image while retaining the initial visual appearance. We report that two distinct images can be made to share the same HOG representation when the overlap between HOG patches is minimal, and the success of this method falls with increasing overlap. This paper is therefore a step in the direction of developing a sampling theorem for representing images by HOG features.",
"title": ""
},
{
"docid": "d8366c1abefa327b44e66bcfffaf11f9",
"text": "Reliable understanding of the 3D driving environment is vital for obstacle detection and adaptive cruise control (ACC) applications. Laser or millimeter wave radars have shown good performance in measuring relative speed and distance in a highway driving environment. However the accuracy of these systems decreases in an urban traffic environment as more confusion occurs due to factors such as parked vehicles, guardrails, poles and motorcycles. A stereovision based sensing system provides an effective supplement to radar-based road scene analysis with its much wider field of view and more accurate lateral information. This paper presents an efficient solution using a stereovision based road scene analysis algorithm which employs the \"U-V-disparity\" concept. This concept is used to classify a 3D road scene into relative surface planes and characterize the features of road pavement surfaces, roadside structures and obstacles. Real-time implementation of the disparity map calculation and the \"U-V-disparity\" classification is also presented.",
"title": ""
},
{
"docid": "e2df843bd6b491e904cc98f746c3314a",
"text": "Cryonic suspension is a relatively new technology that offers those who can afford it the chance to be 'frozen' for future revival when they reach the ends of their lives. This paper will examine the ethical status of this technology and whether its use can be justified. Among the arguments against using this technology are: it is 'against nature', and would change the very concept of death; no friends or family of the 'freezee' will be left alive when he is revived; the considerable expense involved for the freezee and the future society that will revive him; the environmental cost of maintaining suspension; those who wish to use cryonics might not live life to the full because they would economize in order to afford suspension; and cryonics could lead to premature euthanasia in order to maximize chances of success. Furthermore, science might not advance enough to ever permit revival, and reanimation might not take place due to socio-political or catastrophic reasons. Arguments advanced by proponents of cryonics include: the potential benefit to society; the ability to cheat death for at least a few more years; the prospect of immortality if revival is successful; and all the associated benefits that delaying or avoiding dying would bring. It emerges that it might be imprudent not to use the technology, given the relatively minor expense involved and the potential payoff. An adapted and more persuasive version of Pascal's Wager is presented and offered as a conclusive argument in favour of utilizing cryonic suspension.",
"title": ""
},
{
"docid": "b4d234b09b642b228e71bf3dee52ff62",
"text": "The Recurrent Neural Networks and their variants have shown promising performances in sequence modeling tasks such as Natural Language Processing. These models, however, turn out to be impractical and difficult to train when exposed to very high-dimensional inputs due to the large input-to-hidden weight matrix. This may have prevented RNNs’ large-scale application in tasks that involve very high input dimensions such as video modeling; current approaches reduce the input dimensions using various feature extractors. To address this challenge, we propose a new, more general and efficient approach by factorizing the input-to-hidden weight matrix using Tensor-Train decomposition which is trained simultaneously with the weights themselves. We test our model on classification tasks using multiple real-world video datasets and achieve competitive performances with state-of-the-art models, even though our model architecture is orders of magnitude less complex. We believe that the proposed approach provides a novel and fundamental building block for modeling highdimensional sequential data with RNN architectures and opens up many possibilities to transfer the expressive and advanced architectures from other domains such as NLP to modeling highdimensional sequential data.",
"title": ""
},
{
"docid": "820768d9fc4e8f9fb4452e4aeeafd270",
"text": "Lateral epicondylitis (Tennis Elbow) is the most frequent type of myotendinosis and can be responsible for substantial pain and loss of function of the affected limb. Muscular biomechanics characteristics and equipment are important in preventing the conditions. This article present on overview of the current knowledge on lateral Epicondylitis and focuses on Etiology, Diagnosis and treatment strategies, conservative treatment are discussed and recent surgical techniques are outlined. This information should assist health care practitioners who treat patients with this disorder.",
"title": ""
},
{
"docid": "4301f65536c7dcb781e8337bfa99b1e6",
"text": "We present a method for the detection and representation of polysemous nouns, a phenomenon that has received little attention in NLP. The method is based on the exploitation of the semantic information preserved in Word Embeddings. We first prove that polysemous nouns instantiating a particular sense alternation form a separate class when clustering nouns in a lexicon. Such a class, however, does not include those polysemes in which a sense is strongly predominant. We address this problem and present a sense index that, for a given pair of lexical classes, defines the degree of membership of a noun to each class: polysemy is hence implicitly represented as an intermediate value on the continuum between two classes. We finally show that by exploiting the information provided by the sense index it is possible to accurately detect polysemous nouns in the dataset.",
"title": ""
}
] |
scidocsrr
|
5bbed6c30b7cef1945c29e36e8777be3
|
Intelligent irrigation system — An IOT based approach
|
[
{
"docid": "0ef58b9966c7d3b4e905e8306aad3359",
"text": "Agriculture is the back bone of India. To make the sustainable agriculture, this system is proposed. In this system ARM 9 processor is used to control and monitor the irrigation system. Different kinds of sensors are used. This paper presents a fully automated drip irrigation system which is controlled and monitored by using ARM9 processor. PH content and the nitrogen content of the soil are frequently monitored. For the purpose of monitoring and controlling, GSM module is implemented. The system informs user about any abnormal conditions like less moisture content and temperature rise, even concentration of CO2 via SMS through the GSM module.",
"title": ""
},
{
"docid": "a50f168329c1b44ed881e99d66fe7c13",
"text": "Indian agriculture is diverse; ranging from impoverished farm villages to developed farms utilizing modern agricultural technologies. Facility agriculture area in China is expanding, and is leading the world. However, its ecosystem control technology and system is still immature, with low level of intelligence. Promoting application of modern information technology in agriculture will solve a series of problems facing by farmers. Lack of exact information and communication leadsto the loss in production. Our paper is designed to over come these problems. This regulator provides an intelligent monitoring platform framework and system structure for facility agriculture ecosystem based on IOT[3]. This will be a catalyst for the transition from traditional farming to modern farming. This also provides opportunity for creating new technology and service development in IOT (internet of things) farming application. The Internet Of Things makes everything connected. Over 50 years since independence, India has made immense progress towards food productivity. The Indian population has tripled, but food grain production more than quadrupled[1]: there has thus been a substantial increase in available food grain per ca-pita. Modern agriculture practices have a great promise for the economic development of a nation. So we have brought-in an innovative project for the welfare of farmers and also for the farms. There are no day or night restrictions. This is helpful at any time.",
"title": ""
}
] |
[
{
"docid": "5251605df4db79f6a0fc2779a51938e2",
"text": "Drug bioavailability to the developing brain is a major concern in the treatment of neonates and infants as well as pregnant and breast-feeding women. Central adverse drug reactions can have dramatic consequences for brain development, leading to major neurological impairment. Factors setting the cerebral bioavailability of drugs include protein-unbound drug concentration in plasma, local cerebral blood flow, permeability across blood-brain interfaces, binding to neural cells, volume of cerebral fluid compartments, and cerebrospinal fluid secretion rate. Most of these factors change during development, which will affect cerebral drug concentrations. Regarding the impact of blood-brain interfaces, the blood-brain barrier located at the cerebral endothelium and the blood-cerebrospinal fluid barrier located at the choroid plexus epithelium both display a tight phenotype early on in embryos. However, the developmental regulation of some multispecific efflux transporters that also limit the entry of numerous drugs into the brain through barrier cells is expected to favor drug penetration in the neonatal brain. Finally, drug cerebral bioavailability is likely to be affected following perinatal injuries that alter blood-brain interface properties. A thorough investigation of these mechanisms is mandatory for a better risk assessment of drug treatments in pregnant or breast-feeding women, and in neonate and pediatric patients.",
"title": ""
},
{
"docid": "0b5ca91480dfff52de5c1d65c3b32f3d",
"text": "Spotting anomalies in large multi-dimensional databases is a crucial task with many applications in finance, health care, security, etc. We introduce COMPREX, a new approach for identifying anomalies using pattern-based compression. Informally, our method finds a collection of dictionaries that describe the norm of a database succinctly, and subsequently flags those points dissimilar to the norm---with high compression cost---as anomalies.\n Our approach exhibits four key features: 1) it is parameter-free; it builds dictionaries directly from data, and requires no user-specified parameters such as distance functions or density and similarity thresholds, 2) it is general; we show it works for a broad range of complex databases, including graph, image and relational databases that may contain both categorical and numerical features, 3) it is scalable; its running time grows linearly with respect to both database size as well as number of dimensions, and 4) it is effective; experiments on a broad range of datasets show large improvements in both compression, as well as precision in anomaly detection, outperforming its state-of-the-art competitors.",
"title": ""
},
{
"docid": "c9b9ac230838ffaff404784b66862013",
"text": "On the Mathematical Foundations of Theoretical Statistics. Author(s): R. A. Fisher. Source: Philosophical Transactions of the Royal Society of London. Series A Solutions to Exercises. 325. Bibliography. 347. Index Discrete mathematics is an essential part of the foundations of (theoretical) computer science, statistics . 2) Statistical Methods by S.P.Gupta. 3) Mathematical Statistics by Saxena & Kapoor. 4) Statistics by Sancheti & Kapoor. 5) Introduction to Mathematical Statistics Fundamentals of Mathematical statistics by Guptha, S.C &Kapoor, V.K (Sulthan chand. &sons). 2. Introduction to Mathematical statistics by Hogg.R.V and and .",
"title": ""
},
{
"docid": "bf65f2c68808755cfcd13e6cc7d0ccab",
"text": "Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.",
"title": ""
},
{
"docid": "3fcb9ab92334e3e214a7db08a93d5acd",
"text": "BACKGROUND\nA growing body of literature indicates that physical activity can have beneficial effects on mental health. However, previous research has mainly focussed on clinical populations, and little is known about the psychological effects of physical activity in those without clinically defined disorders.\n\n\nAIMS\nThe present study investigates the association between physical activity and mental health in an undergraduate university population based in the United Kingdom.\n\n\nMETHOD\nOne hundred students completed questionnaires measuring their levels of anxiety and depression using the Hospital Anxiety and Depression Scale (HADS) and their physical activity regime using the Physical Activity Questionnaire (PAQ).\n\n\nRESULTS\nSignificant differences were observed between the low, medium and high exercise groups on the mental health scales, indicating better mental health for those who engage in more exercise.\n\n\nCONCLUSIONS\nEngagement in physical activity can be an important contributory factor in the mental health of undergraduate students.",
"title": ""
},
{
"docid": "64d45fa63ac1ea987cec76bf69c4cc30",
"text": "Recently, community psychologists have re-vamped a set of 18 competencies considered important for how we practice community psychology. Three competencies are: (1) ethical, reflexive practice, (2) community inclusion and partnership, and (3) community education, information dissemination, and building public awareness. This paper will outline lessons I-a white working class woman academic-learned about my competency development through my research collaborations, using the lens of affective politics. I describe three lessons, from school-based research sites (elementary schools serving working class students of color and one elite liberal arts school serving wealthy white students). The first lesson, from an elementary school, concerns ethical, reflective practice. I discuss understanding my affect as a barometer of my ability to conduct research from a place of solidarity. The second lesson, which centers community inclusion and partnership, illustrates how I learned about the importance of \"before the beginning\" conversations concerning social justice and conflict when working in elementary schools. The third lesson concerns community education, information dissemination, and building public awareness. This lesson, from a college, taught me that I could stand up and speak out against classism in the face of my career trajectory being threatened. With these lessons, I flesh out key aspects of community practice competencies.",
"title": ""
},
{
"docid": "9d700ef057eb090336d761ebe7f6acb0",
"text": "This article presents initial results on a supervised machine learning approach to determine the semantics of noun compounds in Dutch and Afrikaans. After a discussion of previous research on the topic, we present our annotation methods used to provide a training set of compounds with the appropriate semantic class. The support vector machine method used for this classification experiment utilizes a distributional lexical semantics representation of the compound’s constituents to make its classification decision. The collection of words that occur in the near context of the constituent are considered an implicit representation of the semantics of this constituent. Fscores were reached of 47.8% for Dutch and 51.1% for Afrikaans. Keywords—compound semantics; Afrikaans; Dutch; machine learning; distributional methods",
"title": ""
},
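As a rough illustration of the kind of pipeline the passage above describes, an SVM over distributional (context-word) representations of compound constituents, here is a hedged sketch; the toy contexts, relation labels and feature choices are invented for illustration and are not the authors' actual data or setup.

```python
# Sketch: SVM over bag-of-context-words features for compound-relation labels.
# The corpus, labels and feature window below are placeholders, not the paper's data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# One pseudo-document per compound: words observed near its constituents.
contexts = [
    "wood table kitchen furniture carved material",
    "orange juice fruit squeezed drink material",
    "bread knife blade cut slice purpose",
    "garden flower soil grows bloom location",
]
labels = ["MADE_OF", "MADE_OF", "USED_FOR", "LOCATED_IN"]

clf = make_pipeline(CountVectorizer(), LinearSVC())
clf.fit(contexts, labels)
print(clf.predict(["metal chair frame welded material"]))
```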
{
"docid": "504377fd7a3b7c17d702d81d01a71bb6",
"text": "We propose a framework for multimodal sentiment analysis and emotion recognition using convolutional neural network-based feature extraction from text and visual modalities. We obtain a performance improvement of 10% over the state of the art by combining visual, text and audio features. We also discuss some major issues frequently ignored in multimodal sentiment analysis research: the role of speakerindependent models, importance of the modalities and generalizability. The paper thus serve as a new benchmark for further research in multimodal sentiment analysis and also demonstrates the different facets of analysis to be considered while performing such tasks.",
"title": ""
},
{
"docid": "c953895c57d8906736352698a55c24a9",
"text": "Data scientists and physicians are starting to use artificial intelligence (AI) even in the medical field in order to better understand the relationships among the huge amount of data coming from the great number of sources today available. Through the data interpretation methods made available by the recent AI tools, researchers and AI companies have focused on the development of models allowing to predict the risk of suffering from a specific disease, to make a diagnosis, and to recommend a treatment that is based on the best and most updated scientific evidence. Even if AI is used to perform unimaginable tasks until a few years ago, the awareness about the ongoing revolution has not yet spread through the medical community for several reasons including the lack of evidence about safety, reliability and effectiveness of these tools, the lack of regulation accompanying hospitals in the use of AI by health care providers, the difficult attribution of liability in case of errors and malfunctions of these systems, and the ethical and privacy questions that they raise and that, as of today, are still unanswered.",
"title": ""
},
{
"docid": "44cf5669d05a759ab21b3ebc1f6c340d",
"text": "Linear variable differential transformer (LVDT) sensors are widely used in hydraulic and pneumatic mechatronic systems for measuring physical quantities like displacement, force or pressure. The LVDT sensor consists of two magnetic coupled coils with a common core and this sensor converts the displacement of core into reluctance variation of magnetic circuit. LVDT sensors combines good accuracy (0.1 % error) with low cost, but they require relative complex electronics. Standard electronics for LVDT sensor conditioning is analog $the coupled coils constitute an inductive half-bridge supplied with 5 kHz sinus excitation from a quadrate oscillator. The output phase span is amplified and synchronous demodulated. This analog technology works well but has its drawbacks - hard to adjust, many components and packages, no connection to computer systems. To eliminate all these disadvantages, our team from \"Politehnica\" University of Bucharest has developed a LVDT signal conditioner using system on chip microcontroller MSP430F149 from Texas Instruments. This device integrates all peripherals required for LVDT signal conditioning (pulse width modulation modules, analog to digital converter, timers, enough memory resources and processing power) and offers also excellent low-power options. Resulting electronic module is a one-chip solution made entirely in SMD technology and its small dimensions allow its integration into sensor's body. Present paper focuses on specific issues of this digital solution for LVDT conditioning and compares it with classic analog solution from different points of view: error curve, power consumption, communication options, dimensions and production cost. Microcontroller software (firmware) and digital signal conditioning techniques for LVDT are also analyzed. Use of system on chip devices for signal conditioning allows realization of low cost compact transducers with same or better performances than their analog counterparts, but with extra options like serial communication channels, self-calibration, local storage of measured values and fault detection",
"title": ""
},
{
"docid": "8b5ad6c53d58feefe975e481e2352c52",
"text": "Virtual machine (VM) live migration is a critical feature for managing virtualized environments, enabling dynamic load balancing, consolidation for power management, preparation for planned maintenance, and other management features. However, not all virtual machine live migration is created equal. Variants include memory migration, which relies on shared backend storage between the source and destination of the migration, and storage migration, which migrates storage state as well as memory state. We have developed an automated testing framework that measures important performance characteristics of live migration, including total migration time, the time a VM is unresponsive during migration, and the amount of data transferred over the network during migration. We apply this testing framework and present the results of studying live migration, both memory migration and storage migration, in various virtualization systems including KVM, XenServer, VMware, and Hyper-V. The results provide important data to guide the migration decisions of both system administrators and autonomic cloud management systems.",
"title": ""
},
{
"docid": "8791b422ebeb347294db174168bab439",
"text": "Sleep is superior to waking for promoting performance improvements between sessions of visual perceptual and motor learning tasks. Few studies have investigated possible effects of sleep on auditory learning. A key issue is whether sleep specifically promotes learning, or whether restful waking yields similar benefits. According to the \"interference hypothesis,\" sleep facilitates learning because it prevents interference from ongoing sensory input, learning and other cognitive activities that normally occur during waking. We tested this hypothesis by comparing effects of sleep, busy waking (watching a film) and restful waking (lying in the dark) on auditory tone sequence learning. Consistent with recent findings for human language learning, we found that compared with busy waking, sleep between sessions of auditory tone sequence learning enhanced performance improvements. Restful waking provided similar benefits, as predicted based on the interference hypothesis. These findings indicate that physiological, behavioral and environmental conditions that accompany restful waking are sufficient to facilitate learning and may contribute to the facilitation of learning that occurs during sleep.",
"title": ""
},
{
"docid": "a583bbf2deac0bf99e2790c47598cddd",
"text": "We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel without interference of the global interpreter lock. As part of this project, we introduce BatchPPO, an efficient implementation of the proximal policy optimization algorithm. By open sourcing TensorFlow Agents, we hope to provide a flexible starting point for future projects that accelerates future research in the field.",
"title": ""
},
{
"docid": "54ef290e7c8fbc5c1bcd459df9bc4a06",
"text": "Augmenter of Liver Regeneration (ALR) is a sulfhydryl oxidase carrying out fundamental functions facilitating protein disulfide bond formation. In mammals, it also functions as a hepatotrophic growth factor that specifically stimulates hepatocyte proliferation and promotes liver regeneration after liver damage or partial hepatectomy. Whether ALR also plays a role during vertebrate hepatogenesis is unknown. In this work, we investigated the function of alr in liver organogenesis in zebrafish model. We showed that alr is expressed in liver throughout hepatogenesis. Knockdown of alr through morpholino antisense oligonucleotide (MO) leads to suppression of liver outgrowth while overexpression of alr promotes liver growth. The small-liver phenotype in alr morphants results from a reduction of hepatocyte proliferation without affecting apoptosis. When expressed in cultured cells, zebrafish Alr exists as dimer and is localized in mitochondria as well as cytosol but not in nucleus or secreted outside of the cell. Similar to mammalian ALR, zebrafish Alr is a flavin-linked sulfhydryl oxidase and mutation of the conserved cysteine in the CxxC motif abolishes its enzymatic activity. Interestingly, overexpression of either wild type Alr or enzyme-inactive Alr(C131S) mutant promoted liver growth and rescued the liver growth defect of alr morphants. Nevertheless, alr(C131S) is less efficacious in both functions. Meantime, high doses of alr MOs lead to widespread developmental defects and early embryonic death in an alr sequence-dependent manner. These results suggest that alr promotes zebrafish liver outgrowth using mechanisms that are dependent as well as independent of its sulfhydryl oxidase activity. This is the first demonstration of a developmental role of alr in vertebrate. It exemplifies that a low-level sulfhydryl oxidase activity of Alr is essential for embryonic development and cellular survival. The dose-dependent and partial suppression of alr expression through MO-mediated knockdown allows the identification of its late developmental role in vertebrate liver organogenesis.",
"title": ""
},
{
"docid": "8fa721c98dac13157bcc891c06561ec7",
"text": "Childcare robots are being manufactured and developed with the long term aim of creating surrogate carers. While total child-care is not yet being promoted, there are indications that it is „on the cards‟. We examine recent research and developments in childcare robots and speculate on progress over the coming years by extrapolating from other ongoing robotics work. Our main aim is to raise ethical questions about the part or full-time replacement of primary carers. The questions are about human rights, privacy, robot use of restraint, deception of children and accountability. But the most pressing ethical issues throughout the paper concern the consequences for the psychological and emotional wellbeing of children. We set these in the context of the child development literature on the pathology and causes of attachment disorders. We then consider the adequacy of current legislation and international ethical guidelines on the protection of children from the overuse of robot care.",
"title": ""
},
{
"docid": "b5f2b13b5266c30ba02ff6d743e4b114",
"text": "The increasing scale, technology advances and services of modern networks have dramatically complicated their management such that in the near future it will be almost impossible for human administrators to monitor them. To control this complexity, IBM has introduced a promising approach aiming to create self-managed systems. This approach, called Autonomic Computing, aims to design computing equipment able to self-adapt its configuration and to self-optimize its performance depending on its situation in order to fulfill high-level objectives defined by the human operator. In this paper, we present our autonomic network management architecture (ANEMA) that implements several policy forms to achieve autonomic behaviors in the network equipments. In ANEMA, the high-level objectives of the human administrators and the users are captured and expressed in terms of ‘Utility Function’ policies. The ‘Goal’ policies describe the high-level management directives needed to guide the network to achieve the previous utility functions. Finally, the ‘behavioral’ policies describe the behaviors that should be followed by network equipments to react to changes in their context and to achieve the given ‘Goal’ policies. In order to highlight the benefits of ANEMA architecture and the continuum of policies to introduce autonomic management in a multiservice IP network, a testbed has been implemented and several scenarios have been executed. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f9b110890c90d48b6d2f84aa419c1598",
"text": "Surprise describes a range of phenomena from unexpected events to behavioral responses. We propose a novel measure of surprise and use it for surprise-driven learning. Our surprise measure takes into account data likelihood as well as the degree of commitment to a belief via the entropy of the belief distribution. We find that surprise-minimizing learning dynamically adjusts the balance between new and old information without the need of knowledge about the temporal statistics of the environment. We apply our framework to a dynamic decision-making task and a maze exploration task. Our surprise-minimizing framework is suitable for learning in complex environments, even if the environment undergoes gradual or sudden changes, and it could eventually provide a framework to study the behavior of humans and animals as they encounter surprising events.",
"title": ""
},
{
"docid": "2871d80088d7cabd0cd5bdd5101e6018",
"text": "Owing to superior physical properties such as high electron saturation velocity and high electric breakdown field, GaN-based high electron mobility transistors (HEMTs) are capable of delivering superior performance in microwave amplifiers, high power switches, and high temperature integrated circuits (ICs). Compared to the conventional D-mode HEMTs with negative threshold voltages, enhancement-mode (E-mode) or normally-off HEMTs are desirable in these applications, for reduced circuit design complexity and fail-safe operation. Fluorine plasma treatment has been used to fabricate E-mode HEMTs [1], and is a robust process for the channel threshold voltage modulation. However, there is no standard equipment for this process and various groups have reported a wide range of process parameters [1–4]. In this work, we demonstrate the self-aligned enhancement-mode AlGaN/GaN HEMTs fabricated with a standard fluorine ion implantation. Ion implantation is widely used in semiconductor industry with well-controlled dose and precise implantation profile.",
"title": ""
},
{
"docid": "c62cc1b0a9c1c4cadede943b4cbd8050",
"text": "The problem of parsing has been studied extensively for various formal grammars. Given an input string and a grammar, the parsing problem is to check if the input string belongs to the language generated by the grammar. A closely related problem of great importance is one where the input are a string I and a grammar G and the task is to produce a string I ′ that belongs to the language generated by G and the ‘distance’ between I and I ′ is the smallest (from among all the strings in the language). Specifically, if I is in the language generated by G, then the output should be I. Any parser that solves this version of the problem is called an error correcting parser. In 1972 Aho and Peterson presented a cubic time error correcting parser for context free grammars. Since then this asymptotic time bound has not been improved under the (standard) assumption that the grammar size is a constant. In this paper we present an error correcting parser for context free grammars that runs in O(T (n)) time, where n is the length of the input string and T (n) is the time needed to compute the tropical product of two n× n matrices. In this paper we also present an n M -approximation algorithm for the language edit distance problem that has a run time of O(Mnω), where O(nω) is the time taken to multiply two n× n matrices. To the best of our knowledge, no approximation algorithms have been proposed for error correcting parsing for general context free grammars.",
"title": ""
},
{
"docid": "d64d589068d68ef19d7ac77ab55c8318",
"text": "Cloud computing is a revolutionary paradigm to deliver computing resources, ranging from data storage/processing to software, as a service over the network, with the benefits of efficient resource utilization and improved manageability. The current popular cloud computing models encompass a cluster of expensive and dedicated machines to provide cloud computing services, incurring significant investment in capital outlay and ongoing costs. A more cost effective solution would be to exploit the capabilities of an ad hoc cloud which consists of a cloud of distributed and dynamically untapped local resources. The ad hoc cloud can be further classified into static and mobile clouds: an ad hoc static cloud harnesses the underutilized computing resources of general purpose machines, whereas an ad hoc mobile cloud harnesses the idle computing resources of mobile devices. However, the dynamic and distributed characteristics of ad hoc cloud introduce challenges in system management. In this article, we propose a generic em autonomic mobile cloud (AMCloud) management framework for automatic and efficient service/resource management of ad hoc cloud in both static and mobile modes. We then discuss in detail the possible security and privacy issues in ad hoc cloud computing. A general security architecture is developed to facilitate the study of prevention and defense approaches toward a secure autonomic cloud system. This article is expected to be useful for exploring future research activities to achieve an autonomic and secure ad hoc cloud computing system.",
"title": ""
}
] |
scidocsrr
|
128fb3f7c2349f4e0d863b0a971a2752
|
A survey on information visualization: recent advances and challenges
|
[
{
"docid": "564675e793834758bd66e440b65be206",
"text": "While it is still most common for information visualization researchers to develop new visualizations from a data-or taskdriven perspective, there is growing interest in understanding the types of visualizations people create by themselves for personal use. As part of this recent direction, we have studied a large collection of whiteboards in a research institution, where people make active use of combinations of words, diagrams and various types of visuals to help them further their thought processes. Our goal is to arrive at a better understanding of the nature of visuals that are created spontaneously during brainstorming, thinking, communicating, and general problem solving on whiteboards. We use the qualitative approaches of open coding, interviewing, and affinity diagramming to explore the use of recognizable and novel visuals, and the interplay between visualization and diagrammatic elements with words, numbers and labels. We discuss the potential implications of our findings on information visualization design.",
"title": ""
},
{
"docid": "0a6a3e82b701bfbdbb73a9e8573fc94a",
"text": "Providing effective feedback on resource consumption in the home is a key challenge of environmental conservation efforts. One promising approach for providing feedback about residential energy consumption is the use of ambient and artistic visualizations. Pervasive computing technologies enable the integration of such feedback into the home in the form of distributed point-of-consumption feedback devices to support decision-making in everyday activities. However, introducing these devices into the home requires sensitivity to the domestic context. In this paper we describe three abstract visualizations and suggest four design requirements that this type of device must meet to be effective: pragmatic, aesthetic, ambient, and ecological. We report on the findings from a mixed methods user study that explores the viability of using ambient and artistic feedback in the home based on these requirements. Our findings suggest that this approach is a viable way to provide resource use feedback and that both the aesthetics of the representation and the context of use are important elements that must be considered in this design space.",
"title": ""
}
] |
[
{
"docid": "07ef9eece7de49ee714d4a2adf9bb078",
"text": "Vegetable oil has been proven to be advantageous as a non-toxic, cost-effective and biodegradable solvent to extract polycyclic aromatic hydrocarbons (PAHs) from contaminated soils for remediation purposes. The resulting vegetable oil contained PAHs and therefore required a method for subsequent removal of extracted PAHs and reuse of the oil in remediation processes. In this paper, activated carbon adsorption of PAHs from vegetable oil used in soil remediation was assessed to ascertain PAH contaminated oil regeneration. Vegetable oils, originating from lab scale remediation, with different PAH concentrations were examined to study the adsorption of PAHs on activated carbon. Batch adsorption tests were performed by shaking oil-activated carbon mixtures in flasks. Equilibrium data were fitted with the Langmuir and Freundlich isothermal models. Studies were also carried out using columns packed with activated carbon. In addition, the effects of initial PAH concentration and activated carbon dosage on sorption capacities were investigated. Results clearly revealed the effectiveness of using activated carbon as an adsorbent to remove PAHs from the vegetable oil. Adsorption equilibrium of PAHs on activated carbon from the vegetable oil was successfully evaluated by the Langmuir and Freundlich isotherms. The initial PAH concentrations and carbon dosage affected adsorption significantly. The results indicate that the reuse of vegetable oil was feasible.",
"title": ""
},
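For reference, the two isotherm models named in the passage above are conventionally written as follows (q_e is the amount adsorbed at equilibrium, C_e the equilibrium concentration, and q_max, K_L, K_F, n are fitted constants); these are the textbook forms, not equations quoted from the paper.

```latex
% Standard Langmuir and Freundlich isotherm equations (textbook forms).
\[
\text{Langmuir:}\quad q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e},
\qquad
\text{Freundlich:}\quad q_e = K_F \, C_e^{1/n}
\]
```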
{
"docid": "20c3bfb61bae83494d7451b083bc2202",
"text": "Peripheral nerve hyperexcitability (PNH) syndromes can be subclassified as primary and secondary. The main primary PNH syndromes are neuromyotonia, cramp-fasciculation syndrome (CFS), and Morvan's syndrome, which cause widespread symptoms and signs without the association of an evident peripheral nerve disease. Their major symptoms are muscle twitching and stiffness, which differ only in severity between neuromyotonia and CFS. Cramps, pseudomyotonia, hyperhidrosis, and some other autonomic abnormalities, as well as mild positive sensory phenomena, can be seen in several patients. Symptoms reflecting the involvement of the central nervous system occur in Morvan's syndrome. Secondary PNH syndromes are generally seen in patients with focal or diffuse diseases affecting the peripheral nervous system. The PNH-related symptoms and signs are generally found incidentally during clinical or electrodiagnostic examinations. The electrophysiological findings that are very useful in the diagnosis of PNH are myokymic and neuromyotonic discharges in needle electromyography along with some additional indicators of increased nerve fiber excitability. Based on clinicopathological and etiological associations, PNH syndromes can also be classified as immune mediated, genetic, and those caused by other miscellaneous factors. There has been an increasing awareness on the role of voltage-gated potassium channel complex autoimmunity in primary PNH pathogenesis. Then again, a long list of toxic compounds and genetic factors has also been implicated in development of PNH. The management of primary PNH syndromes comprises symptomatic treatment with anticonvulsant drugs, immune modulation if necessary, and treatment of possible associated dysimmune and/or malignant conditions.",
"title": ""
},
{
"docid": "a0db56f55e2d291cb7cf871c064cf693",
"text": "It's being very important to listen to social media streams whether it's Twitter, Facebook, Messenger, LinkedIn, email or even company own application. As many customers may be using this streams to reach out to company because they need help. The company have setup social marketing team to monitor this stream. But due to huge volumes of users it's very difficult to analyses each and every social message and take a relevant action to solve users grievances, which lead to many unsatisfied customers or may even lose a customer. This papers proposes a system architecture which will try to overcome the above shortcoming by analyzing messages of each ejabberd users to check whether it's actionable or not. If it's actionable then an automated Chatbot will initiates conversation with that user and help the user to resolve the issue by providing a human way interactions using LUIS and cognitive services. To provide a highly robust, scalable and extensible architecture, this system is implemented on AWS public cloud.",
"title": ""
},
{
"docid": "19d4662287a5c3ce1cef85fa601b74ba",
"text": "This paper compares two approaches in identifying outliers in multivariate datasets; Mahalanobis distance (MD) and robust distance (RD). MD has been known suffering from masking and swamping effects and RD is an approach that was developed to overcome problems that arise in MD. There are two purposes of this paper, first is to identify outliers using MD and RD and the second is to show that RD performs better than MD in identifying outliers. An observation is classified as an outlier if MD or RD is larger than a cut-off value. Outlier generating model is used to generate a set of data and MD and RD are computed from this set of data. The results showed that RD can identify outliers better than MD. However, in non-outliers data the performance for both approaches are similar. The results for RD also showed that RD can identify multivariate outliers much better when the number of dimension is large.",
"title": ""
},
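A minimal sketch of the comparison described above, assuming the usual chi-square cutoff on squared distances; the Minimum Covariance Determinant estimator stands in here for the robust distance, which is a common choice but may not be the exact estimator used in the paper.

```python
# Classical Mahalanobis distance (MD) vs. robust distance (RD) for outlier flagging.
# MinCovDet is one common robust estimator; the paper's exact choice is not stated.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
X[:5] += 6.0                                   # plant a few gross outliers

cutoff = chi2.ppf(0.975, df=X.shape[1])        # cutoff for squared distances

md2 = EmpiricalCovariance().fit(X).mahalanobis(X)      # squared classical MD
rd2 = MinCovDet(random_state=0).fit(X).mahalanobis(X)  # squared robust RD

print("MD flags:", np.where(md2 > cutoff)[0])
print("RD flags:", np.where(rd2 > cutoff)[0])
```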
{
"docid": "ca6e39436be1b44ab0e20e0024cd0bbe",
"text": "This paper introduces a new approach, named micro-crowdfunding, for motivating people to participate in achieving a sustainable society. Increasing people's awareness of how they participate in maintaining the sustainability of common resources, such as public sinks, toilets, shelves, and office areas, is central to achieving a sustainable society. Micro-crowdfunding, as proposed in the paper, is a new type of community-based crowdsourcing architecture that is based on the crowdfunding concept and uses the local currency idea as a tool for encouraging people who live in urban environments to increase their awareness of how important it is to sustain small, common resources through their minimum efforts. Because our approach is lightweight and uses a mobile phone, people can participate in micro-crowdfunding activities with little effort anytime and anywhere.\n We present the basic concept of micro-crowdfunding and a prototype system. We also describe our experimental results, which show how economic and social factors are effective in facilitating micro-crowdfunding. Our results show that micro-crowdfunding increases the awareness about social sustainability, and we believe that micro-crowdfunding makes it possible to motivate people for achieving a sustainable society.",
"title": ""
},
{
"docid": "38935c773fb3163a1841fcec62b3e15a",
"text": "We investigate how neural networks can learn and process languages with hierarchical, compositional semantics. To this end, we define the artificial task of processing nested arithmetic expressions, and study whether different types of neural networks can learn to compute their meaning. We find that recursive neural networks can implement a generalising solution to this problem, and we visualise this solution by breaking it up in three steps: project, sum and squash. As a next step, we investigate recurrent neural networks, and show that a gated recurrent unit, that processes its input incrementally, also performs very well on this task: the network learns to predict the outcome of the arithmetic expressions with high accuracy, although performance deteriorates somewhat with increasing length. To develop an understanding of what the recurrent network encodes, visualisation techniques alone do not suffice. Therefore, we develop an approach where we formulate and test multiple hypotheses on the information encoded and processed by the network. For each hypothesis, we derive predictions about features of the hidden state representations at each time step, and train ‘diagnostic classifiers’ to test those predictions. Our results indicate that the networks follow a strategy similar to our hypothesised ‘cumulative strategy’, which explains the high accuracy of the network on novel expressions, the generalisation to longer expressions than seen in training, and the mild deterioration with increasing length. This is turn shows that diagnostic classifiers can be a useful technique for opening up the black box of neural networks. We argue that diagnostic classification, unlike most visualisation techniques, does scale up from small networks in a toy domain, to larger and deeper recurrent networks dealing with real-life data, and may therefore contribute to a better understanding of the internal dynamics of current state-of-the-art models in natural language processing.",
"title": ""
},
{
"docid": "6064bdefac3e861bcd46fa303b0756be",
"text": "Some models of textual corpora employ text generation methods involving n-gram statistics, while others use latent topic variables inferred using the \"bag-of-words\" assumption, in which word order is ignored. Previously, these methods have not been combined. In this work, I explore a hierarchical generative probabilistic model that incorporates both n-gram statistics and latent topic variables by extending a unigram topic model to include properties of a hierarchical Dirichlet bigram language model. The model hyperparameters are inferred using a Gibbs EM algorithm. On two data sets, each of 150 documents, the new model exhibits better predictive accuracy than either a hierarchical Dirichlet bigram language model or a unigram topic model. Additionally, the inferred topics are less dominated by function words than are topics discovered using unigram statistics, potentially making them more meaningful.",
"title": ""
},
{
"docid": "aa4e3c2db7f1a1ac749d5d34014e26a0",
"text": "In this paper, a novel text clustering technique is proposed to summarize text documents. The clustering method, so called ‘Ensemble Clustering Method’, combines both genetic algorithms (GA) and particle swarm optimization (PSO) efficiently and automatically to get the best clustering results. The summarization with this clustering method is to effectively avoid the redundancy in the summarized document and to show the good summarizing results, extracting the most significant and non-redundant sentence from clustering sentences of a document. We tested this technique with various text documents in the open benchmark datasets, DUC01 and DUC02. To evaluate the performances, we used F-measure and ROUGE. The experimental results show that the performance capability of our method is about 11% to 24% better than other summarization algorithms. Key-Words: Text Summarization; Extractive Summarization; Ensemble Clustering; Genetic Algorithms; Particle Swarm Optimization",
"title": ""
},
{
"docid": "5e7b935a73180c9ccad3bc0e82311503",
"text": "What happens if one pushes a cup sitting on a table toward the edge of the table? How about pushing a desk against a wall? In this paper, we study the problem of understanding the movements of objects as a result of applying external forces to them. For a given force vector applied to a specific location in an image, our goal is to predict long-term sequential movements caused by that force. Doing so entails reasoning about scene geometry, objects, their attributes, and the physical rules that govern the movements of objects. We design a deep neural network model that learns long-term sequential dependencies of object movements while taking into account the geometry and appearance of the scene by combining Convolutional and Recurrent Neural Networks. Training our model requires a large-scale dataset of object movements caused by external forces. To build a dataset of forces in scenes, we reconstructed all images in SUN RGB-D dataset in a physics simulator to estimate the physical movements of objects caused by external forces applied to them. Our Forces in Scenes (ForScene) dataset contains 10,335 images in which a variety of external forces are applied to different types of objects resulting in more than 65,000 object movements represented in 3D. Our experimental evaluations show that the challenging task of predicting longterm movements of objects as their reaction to external forces is possible from a single image.",
"title": ""
},
{
"docid": "39cb45c62b83a40f8ea42cb872a7aa59",
"text": "Levy flights are employed in a lattice model of contaminant migration by bioturbation, the reworking of sediment by benthic organisms. The model couples burrowing, foraging, and conveyor-belt feeding with molecular diffusion. The model correctly predicts a square-root dependence on bioturbation rates over a wide range of biomass densities. The model is used to predict the effect of bioturbation on the redistribution of contaminants in laboratory microcosms containing pyrene-inoculated sediments and the tubificid oligochaete Limnodrilus hoffmeisteri. The model predicts the dynamic flux from the sediment and in-bed concentration profiles that are consistent with observations. The sensitivity of flux and concentration profiles to the specific mechanisms of bioturbation are explored with the model. The flux of pyrene to the overlying water was largely controlled by the simulated foraging activities.",
"title": ""
},
{
"docid": "716f8cadac94110c4a00bc81480a4b66",
"text": "The last decade has witnessed the prevalence of sensor and GPS technologies that produce a sheer volume of trajectory data representing the motion history of moving objects. Measuring similarity between trajectories is undoubtedly one of the most important tasks in trajectory data management since it serves as the foundation of many advanced analyses such as similarity search, clustering, and classification. In this light, tremendous efforts have been spent on this topic, which results in a large number of trajectory similarity measures. Generally, each individual work introducing a new distance measure has made specific claims on the superiority of their proposal. However, for most works, the experimental study was focused on demonstrating the efficiency of the search algorithms, leaving the effectiveness aspect unverified empirically. In this paper, we conduct a comparative experimental study on the effectiveness of six widely used trajectory similarity measures based on a real taxi trajectory dataset. By applying a variety of transformations we designed for each original trajectory, our experimental observations demonstrate the advantages and drawbacks of these similarity measures in different circumstances.",
"title": ""
},
{
"docid": "7a2aef39046fe0704061195cc37a010a",
"text": "Conventional design of ferrite-cored inductor employs air gaps to store magnetic energy. In this work, the gap length is allowed to be smaller than the conventional value so that the nonlinear ferrite material is biased in the region with low permeability and, hence, significant energy density. A peak in the inductance-gap relationship has thus been uncovered where the total energy stored in the gaps and the core is maximized. A reluctance model is formulated to explain the peaking behavior, and is verified experimentally. Curves of inductance versus gap length are generated to aid the design of swinging inductance and reduce the core size.",
"title": ""
},
{
"docid": "8b41f536667fda5bfaf25d7ac8d71ab0",
"text": "Video question answering (VideoQA) always involves visual reasoning. When answering questions composing of multiple logic correlations, models need to perform multi-step reasoning. In this paper, we formulate multi-step reasoning in VideoQA as a new task to answer compositional and logical structured questions based on video content. Existing VideoQA datasets are inadequate as benchmarks for the multi-step reasoning due to limitations such as lacking logical structure and having language biases. Thus we design a system to automatically generate a large-scale dataset, namely SVQA (Synthetic Video Question Answering). Compared with other VideoQA datasets, SVQA contains exclusively long and structured questions with various spatial and temporal relations between objects. More importantly, questions in SVQA can be decomposed into human readable logical tree or chain layouts, each node of which represents a sub-task requiring a reasoning operation such as comparison or arithmetic. Towards automatic question answering in SVQA, we develop a new VideoQA model. Particularly, we construct a new attention module, which contains spatial attention mechanism to address crucial and multiple logical sub-tasks embedded in questions, as well as a refined GRU called ta-GRU (temporal-attention GRU) to capture the long-term temporal dependency and gather complete visual cues. Experimental results show the capability of multi-step reasoning of SVQA and the effectiveness of our model when compared with other existing models.",
"title": ""
},
{
"docid": "15881d5448e348c6e1a63e195daa68eb",
"text": "Bottleneck autoencoders have been actively researched as a solution to image compression tasks. However, we observed that bottleneck autoencoders produce subjectively low quality reconstructed images. In this work, we explore the ability of sparse coding to improve reconstructed image quality for the same degree of compression. We observe that sparse image compression produces visually superior reconstructed images and yields higher values of pixel-wise measures of reconstruction quality (PSNR and SSIM) compared to bottleneck autoencoders. In addition, we find that using alternative metrics that correlate better with human perception, such as feature perceptual loss and the classification accuracy, sparse image compression scores up to 18.06% and 2.7% higher, respectively, compared to bottleneck autoencoders. Although computationally much more intensive, we find that sparse coding is otherwise superior to bottleneck autoencoders for the same degree of compression.",
"title": ""
},
{
"docid": "f8f58b75c754f1ed41cdf223a59521b0",
"text": "Domain-invariant (view-invariant and modality-invariant) feature representation is essential for human action recognition. Moreover, given a discriminative visual representation, it is critical to discover the latent correlations among multiple actions in order to facilitate action modeling. To address these problems, we propose a multi-domain and multi-task learning (MDMTL) method to: 1) extract domain-invariant information for multi-view and multi-modal action representation and 2) explore the relatedness among multiple action categories. Specifically, we present a sparse transfer learning-based method to co-embed multi-domain (multi-view and multi-modality) data into a single common space for discriminative feature learning. Additionally, visual feature learning is incorporated into the multi-task learning framework, with the Frobenius-norm regularization term and the sparse constraint term, for joint task modeling and task relatedness-induced feature learning. To the best of our knowledge, MDMTL is the first supervised framework to jointly realize domain-invariant feature learning and task modeling for multi-domain action recognition. Experiments conducted on the INRIA Xmas Motion Acquisition Sequences data set, the MSR Daily Activity 3D (DailyActivity3D) data set, and the Multi-modal & Multi-view & Interactive data set, which is the most recent and largest multi-view and multi-model action recognition data set, demonstrate the superiority of MDMTL over the state-of-the-art approaches.",
"title": ""
},
{
"docid": "6c9c06604d5ef370b803bb54b4fe1e0c",
"text": "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of minibatch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.",
"title": ""
},
{
"docid": "d647470f1fd0ba1898ca766001d20de6",
"text": "Despite the fact that many people suffer from it, an unequivocal definition of dry nose (DN) is not available. Symptoms range from the purely subjective sensation of a rather dry nose to visible crusting of the (inner) nose (nasal mucosa), and a wide range of combinations are met with. Relevant diseases are termed rhinitis sicca anterior, primary and secondary rhinitis atrophicans, rhinitis atrophicans with foetor (ozena), and empty nose syndrome. The diagnosis is based mainly on the patient’s history, inspection of the external and inner nose, endoscopy of the nasal cavity (and paranasal sinuses) and the nasopharynx, with CT, allergy testing and microbiological swabs being performed where indicated. Treatment consists in the elimination of predisposing factors, moistening, removal of crusts, avoidance of injurious factors, care of the mucosa, treatment of infections and where applicable, correction of an over-large air space. Since the uncritical resection of the nasal turbinates is a significant and frequent factor in the genesis of dry nose, secondary RA and ENS, the inferior and middle turbinate should not be resected without adequate justification, and the simultaneous removal of both should not be done other than for a malignant condition. In this paper, we review both the aetiology and clinical presentation of the conditions associated with the symptom dry nose, and its conservative and surgical management.",
"title": ""
},
{
"docid": "6db5de1bb37513c3c251624947ee4e8f",
"text": "The proliferation of Ambient Intelligence (AmI) devices and services and their integration in smart environments creates the need for a simple yet effective way of controlling and communicating with them. Towards that direction, the application of the Trigger -- Action model has attracted a lot of research with many systems and applications having been developed following that approach. This work introduces ParlAmI, a multimodal conversational interface aiming to give its users the ability to determine the behavior of AmI environments, by creating rules using natural language as well as a GUI. The paper describes ParlAmI, its requirements and functionality, and presents the findings of a user-based evaluation which was conducted.",
"title": ""
},
{
"docid": "4bdcc552853c8b658762c0c5d509f362",
"text": "In this work, we study the problem of partof-speech tagging for Tweets. In contrast to newswire articles, Tweets are usually informal and contain numerous out-ofvocabulary words. Moreover, there is a lack of large scale labeled datasets for this domain. To tackle these challenges, we propose a novel neural network to make use of out-of-domain labeled data, unlabeled in-domain data, and labeled indomain data. Inspired by adversarial neural networks, the proposed method tries to learn common features through adversarial discriminator. In addition, we hypothesize that domain-specific features of target domain should be preserved in some degree. Hence, the proposed method adopts a sequence-to-sequence autoencoder to perform this task. Experimental results on three different datasets show that our method achieves better performance than state-of-the-art methods.",
"title": ""
},
{
"docid": "7e5b18a0356a89a0285f80a2224d8b12",
"text": "Machine recognition of a handwritten mathematical expression (HME) is challenging due to the ambiguities of handwritten symbols and the two-dimensional structure of mathematical expressions. Inspired by recent work in deep learning, we present Watch, Attend and Parse (WAP), a novel end-to-end approach based on neural network that learns to recognize HMEs in a two-dimensional layout and outputs them as one-dimensional character sequences in LaTeX format. Inherently unlike traditional methods, our proposed model avoids problems that stem from symbol segmentation, and it does not require a predefined expression grammar. Meanwhile, the problems of symbol recognition and structural analysis are handled, respectively, using a watcher and a parser. We employ a convolutional neural network encoder that takes HME images as input as the watcher and employ a recurrent neural network decoder equipped with an attention mechanism as the parser to generate LaTeX sequences. Moreover, the correspondence between the input expressions and the output LaTeX sequences is learned automatically by the attention mechanism. We validate the proposed approach on a benchmark published by the CROHME international competition. Using the official training dataset, WAP significantly outperformed the state-of-the-art method with an expression recognition accuracy of 46.55% on CROHME 2014 and 44.55% on CROHME 2016. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
92217306dcd4a413e3f60d0523ef15f5
|
The Controversy Surrounding The Man Who Would Be Queen: A Case History of the Politics of Science, Identity, and Sex in the Internet Age
|
[
{
"docid": "34cab0c02d5f5ec5183bd63c01f932c7",
"text": "Autogynephilia is defined as a male’s propensity to be sexually aroused by the thought or image of himself as female. Autogynephilia explains the desire for sex reassignment of some maleto-female (MtF) transsexuals. It can be conceptualized as both a paraphilia and a sexual orientation. The concept of autogynephilia provides an alternative to the traditional model of transsexualism that emphasizes gender identity. Autogynephilia helps explain mid-life MtF gender transition, progression from transvestism to transsexualism, the prevalence of other paraphilias among MtF transsexuals, and late development of sexual interest in male partners. Hormone therapy and sex reassignment surgery can be effective treatments in autogynephilic transsexualism. The concept of autogynephilia can help clinicians better understand MtF transsexual clients who recognize a strong sexual component to their gender dysphoria. (Journal of Gay & Lesbian Psychotherapy, 8(1/2), 2004, pp. 69-87.)",
"title": ""
}
] |
[
{
"docid": "18f2e2a5e1b4d51a0a05c559a11a023e",
"text": "A novel forward coupler using coupled composite right/left-handed (CRLH) transmission lines (TLs) is presented. Forward coupling is enhanced by the CRLH TLs, which have a considerable difference between the effective phase constants in the even and odd modes. A 10-dB forward coupler using the coupled CRLH TLs is simulated and experimentally demonstrated in the S-band. Its coupled-line length is reduced to half that of the conventional right-handed forward coupler with the same coupling.",
"title": ""
},
{
"docid": "63ddab85be58aa2b9576d9b540ac31ed",
"text": "BACKGROUND\nThe objective of this study was to translate and to test the reliability and validity of the 12-item General Health Questionnaire (GHQ-12) in Iran.\n\n\nMETHODS\nUsing a standard 'forward-backward' translation procedure, the English language version of the questionnaire was translated into Persian (Iranian language). Then a sample of young people aged 18 to 25 years old completed the questionnaire. In addition, a short questionnaire containing demographic questions and a single measure of global quality of life was administered. To test reliability the internal consistency was assessed by Cronbach's alpha coefficient. Validity was performed using convergent validity. Finally, the factor structure of the questionnaire was extracted by performing principal component analysis using oblique factor solution.\n\n\nRESULTS\nIn all 748 young people entered into the study. The mean age of respondents was 21.1 (SD = 2.1) years. Employing the recommended method of scoring (ranging from 0 to 12), the mean GHQ score was 3.7 (SD = 3.5). Reliability analysis showed satisfactory result (Cronbach's alpha coefficient = 0.87). Convergent validity indicated a significant negative correlation between the GHQ-12 and global quality of life scores as expected (r = -0.56, P < 0.0001). The principal component analysis with oblique rotation solution showed that the GHQ-12 was a measure of psychological morbidity with two-factor structure that jointly accounted for 51% of the variance.\n\n\nCONCLUSION\nThe study findings showed that the Iranian version of the GHQ-12 has a good structural characteristic and is a reliable and valid instrument that can be used for measuring psychological well being in Iran.",
"title": ""
},
{
"docid": "f702a8c28184a6d49cd2f29a1e4e7ea4",
"text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.",
"title": ""
},
{
"docid": "9122fa8d5332e98a012e1ede2f12b6cc",
"text": "Ghana’s banking system has experienced interesting developments in the past two decades. Products such as international funds transfer, consumer/hire purchase loan and travelers’ cheque, personal computer banking, telephone banking, internet banking, branchless banking, SMS banking have been developed (Abor, 2005). Automated teller machines (ATMs) have become common, giving clients the freedom to transact business at their own convenience (Abor, 2005; Hinson, Amidu and Ensah, 2006). The development of these products has brought fierce competition within the banking industry; as a result, the financial sector has to rethink the way business is carried out, because of this competitive edge. Such competitive edge is driven by business and technological factors especially improvement in telecommunication networks and advancement in computer technology in Ghana (Hinson, Amidu and Ensah, 2006). In business today, the power balance has shifted from supply to demand push. Technological factors; especially developments in information technology are as much a cause as an effect of the transformation to new ways of doing business (Beulen, Ribbers & Roos, 2006). As a result of these developments, traditional value chains are being unbundled (Parker, 1999). One may ask whether such contemporary ways are evident in the Ghanaian banking sector as well? Commercial banks in Ghana are no exception to these changing business trends. Consequently, outsourcing of services has now become paramount to banks in Ghana. IT outsourcing is a major part of outsourcing decisions in commercial banks in Ghana. In Abstract: Ghana’s banking sector is currently faced with swift competition due to the increasing number of players in the market. Over the past ten (10) years, the number of commercial banks in the country has doubled. Banks are faced with the challenge of developing innovative products and services, and also expand rapidly. Facilities management support services are critical to this trend of development. Commercial banks need to make a delicate choice between make or buy of these support services (use-in-house expert or outsource). Unarguably, the need for banks to concentrate on their core business of banking and finance and outsource other non-core services to enhance shareholders wealth cannot be over emphasized. Although outsourcing has gained global recognition, the practice is quite new to commercial banks in Ghana. In recent times, commercial banks have outsourced numerous non-core services such as ICT, janitorial services, security, and even part of bank’s human resources. Whereas outsourcing might come with some comparative advantages for the banks, there are still fears of some uncertainties. Focusing on literature on outsourcing and authors own perspective from the banking sector in Ghana, this paper present the key risks likely to come with outsourcing and what future directions ought to be if such risk are to be reduced to its barest minimum. The paper presents a theoretical framework for outsourcing, a platform for further research on outsourcing and for improvement of knowledge.",
"title": ""
},
{
"docid": "597c6ba95d7bf037983e82d91f6a1b74",
"text": "An effective solution of generating OAM-carrying radio beams with three polarizations is provided. Through the reasonable configuration of phased antenna array using elements with three polarizations, the OAM radio waves with three polarizations for different states can be generated. The vectors of electric fields with different OAM states for linear, as well as left or right circular polarizations are presented and analyzed in detail.",
"title": ""
},
{
"docid": "d00df5e0c5990c05d5a67e311586a68a",
"text": "The present research explored the controversial link between global self-esteem and externalizing problems such as aggression, antisocial behavior, and delinquency. In three studies, we found a robust relation between low self-esteem and externalizing problems. This relation held for measures of self-esteem and externalizing problems based on self-report, teachers' ratings, and parents' ratings, and for participants from different nationalities (United States and New Zealand) and age groups (adolescents and college students). Moreover, this relation held both cross-sectionally and longitudinally and after controlling for potential confounding variables such as supportive parenting, parent-child and peer relationships, achievement-test scores, socioeconomic status, and IQ. In addition, the effect of self-esteem on aggression was independent of narcissism, an important finding given recent claims that individuals who are narcissistic, not low in self-esteem, are aggressive. Discussion focuses on clarifying the relations among self-esteem, narcissism, and externalizing problems.",
"title": ""
},
{
"docid": "a1eeb5721d13b78abbeb46eac559f58f",
"text": "Immersive video offers the freedom to navigate inside virtualized environment. Instead of streaming the bulky immersive videos entirely, a viewport (also referred to as field of view, FoV) adaptive streaming is preferred. We often stream the high-quality content within current viewport, while reducing the quality of representation elsewhere to save the network bandwidth consumption. Consider that we could refine the quality when focusing on a new FoV, in this paper, we model the perceptual impact of the quality variations (through adapting the quantization stepsize and spatial resolution) with respect to the refinement duration, and yield a product of two closed-form exponential functions that well explain the joint quantization and resolution induced quality impact. Analytical model is crossvalidated using another set of data, where both Pearson and Spearman’s rank correlation coefficients are close to 0.98. Our work is devised to optimize the adaptive FoV streaming of the immersive video under limited network resource. Numerical results show that our proposed model significantly improves the quality of experience of users, with about 9.36% BD-Rate (Bjontegaard Delta Rate) improvement on average as compared to other representative methods, particularly under the limited bandwidth.",
"title": ""
},
{
"docid": "0ec337f7af66ede2a97ade80ce27c131",
"text": "The processing time required by a cryptographic primitive implemented in hardware is an important metric for its performance but it has not received much attention in recent publications on lightweight cryptography. Nevertheless, there are important applications for cost effective low-latency encryption. As the first step in the field, this paper explores the lowlatency behavior of hardware implementations of a set of block ciphers. The latency of the implementations is investigated as well as the trade-offs with other metrics such as circuit area, time-area product, power, and energy consumption. The obtained results are related back to the properties of the underlying cipher algorithm and, as it turns out, the number of rounds, their complexity, and the similarity of encryption and decryption procedures have a strong impact on the results. We provide a qualitative description and conclude with a set of recommendations for aspiring low-latency block cipher designers.",
"title": ""
},
{
"docid": "94e2bfa218791199a59037f9ea882487",
"text": "As a developing discipline, research results in the field of human computer interaction (HCI) tends to be \"soft\". Many workers in the field have argued that the advancement of HCI lies in \"hardening\" the field with quantitative and robust models. In reality, few theoretical, quantitative tools are available in user interface research and development. A rare exception to this is Fitts' law. Extending information theory to human perceptual-motor system, Paul Fitts (1954) found a logarithmic relationship that models speed accuracy tradeoffs in aimed movements. A great number of studies have verified and / or applied Fitts' law to HCI problems, such as pointing performance on a screen, making Fitts' law one of the most intensively studied topic in the HCI literature.",
"title": ""
},
{
"docid": "07631274713ad80653552767d2fe461c",
"text": "Life cycle assessment (LCA) methodology was used to determine the optimum municipal solid waste (MSW) management strategy for Eskisehir city. Eskisehir is one of the developing cities of Turkey where a total of approximately 750tons/day of waste is generated. An effective MSW management system is needed in this city since the generated MSW is dumped in an unregulated dumping site that has no liner, no biogas capture, etc. Therefore, five different scenarios were developed as alternatives to the current waste management system. Collection and transportation of waste, a material recovery facility (MRF), recycling, composting, incineration and landfilling processes were considered in these scenarios. SimaPro7 libraries were used to obtain background data for the life cycle inventory. One ton of municipal solid waste of Eskisehir was selected as the functional unit. The alternative scenarios were compared through the CML 2000 method and these comparisons were carried out from the abiotic depletion, global warming, human toxicity, acidification, eutrophication and photochemical ozone depletion points of view. According to the comparisons and sensitivity analysis, composting scenario, S3, is the more environmentally preferable alternative. In this study waste management alternatives were investigated only on an environmental point of view. For that reason, it might be supported with other decision-making tools that consider the economic and social effects of solid waste management.",
"title": ""
},
{
"docid": "bfe5c10940d4cccfb071598ed04020ac",
"text": "BACKGROUND\nKnowledge about quality of life and sexual health in patients with genital psoriasis is limited.\n\n\nOBJECTIVES\nWe studied quality of life and sexual function in a large group of patients with genital psoriasis by means of validated questionnaires. In addition, we evaluated whether sufficient attention is given by healthcare professionals to sexual problems in patients with psoriasis, as perceived by the patients.\n\n\nMETHODS\nA self-administered questionnaire was sent to 1579 members of the Dutch Psoriasis Association. Sociodemographic patient characteristics, medical data and scores of several validated questionnaires regarding quality of life (Dermatology Life Quality Index) and sexual health (Sexual Quality of Life Questionnaire for use in Men, International Index of Erectile Function, Female Sexual Distress Scale and Female Sexual Function Index) were collected and analysed.\n\n\nRESULTS\nThis study (n = 487) shows that psoriasis has a detrimental effect on quality of life and sexual health. Patients with genital lesions reported even significantly worse quality of life than patients without genital lesions (mean ± SD quality of life scores 8·5 ± 6·5 vs. 5·5 ± 4·6, respectively, P < 0·0001). Sexual distress and dysfunction are particularly prominent in women (reported by 37·7% and 48·7% of the female patients, respectively). Sexual distress is especially high when genital skin is affected (mean ± SD sexual distress score in patients with genital lesions 16·1 ± 12·1 vs. 10·1 ± 9·7 in patients without genital lesions, P = 0·001). The attention given to possible sexual problems in the psoriasis population by healthcare professionals is perceived as insufficient by patients.\n\n\nCONCLUSIONS\nIn addition to quality of life, sexual health is diminished in a considerable number of patients with psoriasis and particularly women with genital lesions have on average high levels of sexual distress. We underscore the need for physicians to pay attention to the impact of psoriasis on psychosocial and sexual health when treating patients for this skin disease.",
"title": ""
},
{
"docid": "647ff27223a27396ffc15c24c5ff7ef1",
"text": "Mobile phones are increasingly used for security sensitive activities such as online banking or mobile payments. This usually involves some cryptographic operations, and therefore introduces the problem of securely storing the corresponding keys on the phone. In this paper we evaluate the security provided by various options for secure storage of key material on Android, using either Android's service for key storage or the key storage solution in the Bouncy Castle library. The security provided by the key storage service of the Android OS depends on the actual phone, as it may or may not make use of ARM TrustZone features. Therefore we investigate this for different models of phones.\n We find that the hardware-backed version of the Android OS service does offer device binding -- i.e. keys cannot be exported from the device -- though they could be used by any attacker with root access. This last limitation is not surprising, as it is a fundamental limitation of any secure storage service offered from the TrustZone's secure world to the insecure world. Still, some of Android's documentation is a bit misleading here.\n Somewhat to our surprise, we find that in some respects the software-only solution of Bouncy Castle is stronger than the Android OS service using TrustZone's capabilities, in that it can incorporate a user-supplied password to secure access to keys and thus guarantee user consent.",
"title": ""
},
{
"docid": "0e56318633147375a1058a6e6803e768",
"text": "150/150). Large-scale distributed analyses of over 30,000 MRI scans recently detected common genetic variants associated with the volumes of subcortical brain structures. Scaling up these efforts, still greater computational challenges arise in screening the genome for statistical associations at each voxel in the brain, localizing effects using “image-wide genome-wide” testing (voxelwise GWAS, vGWAS). Here we benefit from distributed computations at multiple sites to meta-analyze genome-wide image-wide data, allowing private genomic data to stay at the site where it was collected. Site-specific tensorbased morphometry (TBM) is performed with a custom template for each site, using a multi channel registration. A single vGWAS testing 10 variants against 2 million voxels can yield hundreds of TB of summary statistics, which would need to be transferred and pooled for meta-analysis. We propose a 2-step method, which reduces data transfer for each site to a subset of SNPs and voxels guaranteed to contain all significant hits.",
"title": ""
},
{
"docid": "34cc70a2acf5680442f0511c50215d25",
"text": "Machine Learning has traditionally focused on narrow artificial intelligence solutions for specific problems. Despite this, we observe two trends in the state-of-the-art: One, increasing architectural homogeneity in algorithms and models. Two, algorithms having more general application: New techniques often beat many benchmarks simultaneously. We review the changes responsible for these trends and look to computational neuroscience literature to anticipate future progress.",
"title": ""
},
{
"docid": "12fe6e1217fb269eb2b7f93e76a35134",
"text": "In this paper, we propose to extend the recently introduced model-agnostic meta-learning algorithm (MAML, Finn et al., 2017) for lowresource neural machine translation (NMT). We frame low-resource translation as a metalearning problem, and we learn to adapt to low-resource languages based on multilingual high-resource language tasks. We use the universal lexical representation (Gu et al., 2018b) to overcome the input-output mismatch across different languages. We evaluate the proposed meta-learning strategy using eighteen European languages (Bg, Cs, Da, De, El, Es, Et, Fr, Hu, It, Lt, Nl, Pl, Pt, Sk, Sl, Sv and Ru) as source tasks and five diverse languages (Ro, Lv, Fi, Tr and Ko) as target tasks. We show that the proposed approach significantly outperforms the multilingual, transfer learning based approach (Zoph et al., 2016) and enables us to train a competitive NMT system with only a fraction of training examples. For instance, the proposed approach can achieve as high as 22.04 BLEU on Romanian-English WMT’16 by seeing only 16,000 translated words (⇠ 600 parallel sentences).",
"title": ""
},
{
"docid": "52bf46e7c0449a274c33765586a2e9a1",
"text": "A stand-alone direction finding RFID reader is developed for mobile robot applications employing a dual-directional antenna. By adding search and localization capabilities to the current state of RFID technology, robots will be able to acquire and dock to a static target in a real environment without requiring a map or landmarks. Furthermore, we demonstrate RFID-enabled tracking and following of a target moving unpredictably with a mobile robot. The RFID reader keeps the robot aware of the direction of arrival (DOA) of the signal of interest toward which the dual-directional antenna faces the target transponder. The simulation results show that the proposed RFID system can track in real time the movement of the target transponder. To verify the effectiveness of the system in a real environment, we perform a variety of experiments in a hallway including target tracking and following with a commercial mobile robot.",
"title": ""
},
{
"docid": "216f97a97d240456d36ec765fd45739e",
"text": "This paper explores the growing trend of using mobile technology in university classrooms, exploring the use of tablets in particular, to identify learning benefits faced by students. Students, acting on their efficacy beliefs, make decisions regarding technology’s influence in improving their education. We construct a theoretical model in which internal and external factors affect a student’s self-efficacy which in turn affects the extent of adoption of a device for educational purposes. Through qualitative survey responses of university students who were given an Apple iPad to keep for the duration of a university course we find high levels of self-efficacy leading to positive views of the technology’s learning enhancement capabilities. Student observations on the practicality of the technology, off-topic use and its effects, communication, content, and perceived market advantage of using a tablet are also explored.",
"title": ""
},
{
"docid": "0c6b1a6b8c3b421821b49a31e39943db",
"text": "This paper proposes an ignition system for real time detection of driver’s face recognition, finger print authentication as well as alcohol intoxication and subsequently alerting them. The main aim of this proposed system is to reduce the number of accidents due to driver’s drowsiness and alcohol intake to increase the transportation safety as well as protect the vehicle from theft. This proposed system contains 8-megapixels digital USB camera, Raspberry-pi loaded. Face detection is the important part of this project will be done using Open CV. [2] [3].",
"title": ""
},
{
"docid": "5c83df8ba41b37d86f46de7963798b2f",
"text": "Experiments show a primary role of extracellular potassium concentrations in neuronal hyperexcitability and in the generation of epileptiform bursting and depolarization blocks without synaptic mechanisms. We adopt a physiologically relevant hippocampal CA1 neuron model in a zero-calcium condition to better understand the function of extracellular potassium in neuronal seizurelike activities. The model neuron is surrounded by interstitial space in which potassium ions are able to accumulate. Potassium currents, Na{+}-K{+} pumps, glial buffering, and ion diffusion are regulatory mechanisms of extracellular potassium. We also consider a reduced model with a fixed potassium concentration. The bifurcation structure and spiking frequency of the two models are studied. We show that, besides hyperexcitability and bursting pattern modulation, the potassium dynamics can induce not only bistability but also tristability of different firing patterns. Our results reveal the emergence of the complex behavior of multistability due to the dynamical [K{+}]{o} modulation on neuronal activities.",
"title": ""
},
{
"docid": "e5b2857bfe745468453ef9dabbf5c527",
"text": "We assume that a high-dimensional datum, like an image, is a compositional expression of a set of properties, with a complicated non-linear relationship between the datum and its properties. This paper proposes a factorial mixture prior for capturing latent properties, thereby adding structured compositionality to deep generative models. The prior treats a latent vector as belonging to Cartesian product of subspaces, each of which is quantized separately with a Gaussian mixture model. Some mixture components can be set to represent properties as observed random variables whenever labeled properties are present. Through a combination of stochastic variational inference and gradient descent, a method for learning how to infer discrete properties in an unsupervised or semi-supervised way is outlined and empirically evaluated.",
"title": ""
}
] |
scidocsrr
|
1418ec82ce97fa32e4b51cf663172f69
|
Image denoising via adaptive soft-thresholding based on non-local samples
|
[
{
"docid": "c6a44d2313c72e785ae749f667d5453c",
"text": "Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0; 1] from noisy data di = f(ti) + zi, i = 0; : : : ; n 1, ti = i=n, zi iid N(0; 1). The reconstruction f̂ n is de ned in the wavelet domain by translating all the empirical wavelet coe cients of d towards 0 by an amount p 2 log(n) = p n. We prove two results about that estimator. [Smooth]: With high probability f̂ n is at least as smooth as f , in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.",
"title": ""
},
{
"docid": "db913c6fe42f29496e13aa05a6489c9b",
"text": "As a convex relaxation of the low rank matrix factorization problem, the nuclear norm minimization has been attracting significant research interest in recent years. The standard nuclear norm minimization regularizes each singular value equally to pursue the convexity of the objective function. However, this greatly restricts its capability and flexibility in dealing with many practical problems (e.g., denoising), where the singular values have clear physical meanings and should be treated differently. In this paper we study the weighted nuclear norm minimization (WNNM) problem, where the singular values are assigned different weights. The solutions of the WNNM problem are analyzed under different weighting conditions. We then apply the proposed WNNM algorithm to image denoising by exploiting the image nonlocal self-similarity. Experimental results clearly show that the proposed WNNM algorithm outperforms many state-of-the-art denoising algorithms such as BM3D in terms of both quantitative measure and visual perception quality.",
"title": ""
},
{
"docid": "4d9cf5a29ebb1249772ebb6a393c5a4e",
"text": "This paper presents a novel strategy for high-fidelity image restoration by characterizing both local smoothness and nonlocal self-similarity of natural images in a unified statistical manner. The main contributions are three-fold. First, from the perspective of image statistics, a joint statistical modeling (JSM) in an adaptive hybrid space-transform domain is established, which offers a powerful mechanism of combining local smoothness and nonlocal self-similarity simultaneously to ensure a more reliable and robust estimation. Second, a new form of minimization functional for solving the image inverse problem is formulated using JSM under a regularization-based framework. Finally, in order to make JSM tractable and robust, a new Split Bregman-based algorithm is developed to efficiently solve the above severely underdetermined inverse problem associated with theoretical proof of convergence. Extensive experiments on image inpainting, image deblurring, and mixed Gaussian plus salt-and-pepper noise removal applications verify the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
}
] |
[
{
"docid": "fc04f9bd523e3d2ca57ab3a8e730397b",
"text": "Interactive, distributed, and embedded systems often behave stochastically, for example, when inputs, message delays, or failures conform to a probability distribution. However, reasoning analytically about the behavior of complex stochastic systems is generally infeasible. While simulations of systems are commonly used in engineering practice, they have not traditionally been used to reason about formal specifications. Statistical model checking (SMC) addresses this weakness by using a simulation-based approach to reason about precise properties specified in a stochastic temporal logic. A specification for a communication system may state that within some time bound, the probability that the number of messages in a queue will be greater than 5 must be less than 0.01. Using SMC, executions of a stochastic system are first sampled, after which statistical techniques are applied to determine whether such a property holds. While the output of sample-based methods are not always correct, statistical inference can quantify the confidence in the result produced. In effect, SMC provides a more widely applicable and scalable alternative to analysis of properties of stochastic systems using numerical and symbolic methods. SMC techniques have been successfully applied to analyze systems with large state spaces in areas such as computer networking, security, and systems biology. In this article, we survey SMC algorithms, techniques, and tools, while emphasizing current limitations and tradeoffs between precision and scalability.",
"title": ""
},
{
"docid": "6b933bbad26efaf65724d0c923330e75",
"text": "This paper presents a 138-170 GHz active frequency doubler implemented in a 0.13 μm SiGe BiCMOS technology with a peak output power of 5.6 dBm and peak power-added efficiency of 7.6%. The doubler achieves a peak conversion gain of 4.9 dB and consumes only 36 mW of DC power at peak drive through the use of a push-push frequency doubling stage optimized for low drive power, along with a low-power output buffer. To the best of our knowledge, this doubler achieves the highest output power, efficiency, and fundamental frequency suppression of all D-band and G-band SiGe HBT frequency doublers to date.",
"title": ""
},
{
"docid": "deaa86a5fe696d887140e29d0b2ae22c",
"text": "The high prevalence of spinal stenosis results in a large volume of MRI imaging, yet interpretation can be time-consuming with high inter-reader variability even among the most specialized radiologists. In this paper, we develop an efficient methodology to leverage the subject-matter-expertise stored in large-scale archival reporting and image data for a deep-learning approach to fully-automated lumbar spinal stenosis grading. Specifically, we introduce three major contributions: (1) a natural-language-processing scheme to extract level-by-level ground-truth labels from free-text radiology reports for the various types and grades of spinal stenosis (2) accurate vertebral segmentation and disc-level localization using a U-Net architecture combined with a spine-curve fitting method, and (3) a multiinput, multi-task, and multi-class convolutional neural network to perform central canal and foraminal stenosis grading on both axial and sagittal imaging series inputs with the extracted report-derived labels applied to corresponding imaging level segments. This study uses a large dataset of 22796 disc-levels extracted from 4075 patients. We achieve state-ofthe-art performance on lumbar spinal stenosis classification and expect the technique will increase both radiology workflow efficiency and the perceived value of radiology reports for referring clinicians and patients.",
"title": ""
},
{
"docid": "af0b4e07ec7a60d0021e8bddde5e8b92",
"text": "Social Network Sites (SNSs) offer a plethora of privacy controls, but users rarely exploit all of these mechanisms, nor do they do so in the same manner. We demonstrate that SNS users instead adhere to one of a small set of distinct privacy management strategies that are partially related to their level of privacy feature awareness. Using advanced Factor Analysis methods on the self-reported privacy behaviors and feature awareness of 308 Facebook users, we extrapolate six distinct privacy management strategies, including: Privacy Maximizers, Selective Sharers, Privacy Balancers, Self-Censors, Time Savers/Consumers, and Privacy Minimalists and six classes of privacy proficiency based on feature awareness, ranging from Novices to Experts. We then cluster users on these dimensions to form six distinct behavioral profiles of privacy management strategies and six awareness profiles for privacy proficiency. We further analyze these privacy profiles to suggest opportunities for training and education, interface redesign, and new approaches for personalized privacy recommendations.",
"title": ""
},
{
"docid": "0fefdbc0dbe68391ccfc912be937f4fc",
"text": "Privacy and security are essential requirements in practical biometric systems. In order to prevent the theft of biometric patterns, it is desired to modify them through revocable and non invertible transformations called Cancelable Biometrics. In this paper, we propose an efficient algorithm for generating a Cancelable Iris Biometric based on Sectored Random Projections. Our algorithm can generate a new pattern if the existing one is stolen, retain the original recognition performance and prevent extraction of useful information from the transformed patterns. Our method also addresses some of the drawbacks of existing techniques and is robust to degradations due to eyelids and eyelashes.",
"title": ""
},
{
"docid": "5bd9b0de217f2a537a5fadf99931d149",
"text": "A linear programming (LP) method for security dispatch and emergency control calculations on large power systems is presented. The method is reliable, fast, flexible, easy to program, and requires little computer storage. It works directly with the normal power-system variables and limits, and incorporates the usual sparse matrix techniques. An important feature of the method is that it handles multi-segment generator cost curves neatly and efficiently.",
"title": ""
},
{
"docid": "968ea2dcfd30492a81a71be25f16e350",
"text": "Tree-structured data are becoming ubiquitous nowadays and manipulating them based on similarity is essential for many applications. The generally accepted similarity measure for trees is the edit distance. Although similarity search has been extensively studied, searching for similar trees is still an open problem due to the high complexity of computing the tree edit distance. In this paper, we propose to transform tree-structured data into an approximate numerical multidimensional vector which encodes the original structure information. We prove that the L1 distance of the corresponding vectors, whose computational complexity is O(|T1| + |T2|), forms a lower bound for the edit distance between trees. Based on the theoretical analysis, we describe a novel algorithm which embeds the proposed distance into a filter-and-refine framework to process similarity search on tree-structured data. The experimental results show that our algorithm reduces dramatically the distance computation cost. Our method is especially suitable for accelerating similarity query processing on large trees in massive datasets.",
"title": ""
},
{
"docid": "aac5f1bd2459a19c42bb0c48e99e22f0",
"text": "This study examined multiple levels of adolescents' interpersonal functioning, including general peer relations (peer crowd affiliations, peer victimization), and qualities of best friendships and romantic relationships as predictors of symptoms of depression and social anxiety. An ethnically diverse sample of 421 adolescents (57% girls; 14 to 19 years) completed measures of peer crowd affiliation, peer victimization, and qualities of best friendships and romantic relationships. Peer crowd affiliations (high and low status), positive qualities in best friendships, and the presence of a dating relationship protected adolescents against feelings of social anxiety, whereas relational victimization and negative interactions in best friendships predicted high social anxiety. In contrast, affiliation with a high-status peer crowd afforded some protection against depressive affect; however, relational victimization and negative qualities of best friendships and romantic relationships predicted depressive symptoms. Some moderating effects for ethnicity were observed. Findings indicate that multiple aspects of adolescents' social relations uniquely contribute to feelings of internal distress. Implications for research and preventive interventions are discussed.",
"title": ""
},
{
"docid": "0cd46ebc56a6f640931ac4a81676968f",
"text": "An improved direct torque controlled induction motor drive is reported in this paper. It is established that the conventional direct torque controlled drive has more torque and flux ripples in steady state, which result in poor torque response, acoustic noise and incorrect speed estimations. Hysteresis controllers also make the switching frequency of voltage source inverter a variable quantity. A strategy of variable duty ratio control scheme is proposed to increase switching frequency, and adjust the width of hysteresis bands according to the switching frequency. This technique minimizes torque and current ripples, improves torque response, and reduces switching losses in spite of its simplicity. Simulation results establish the improved performance of the proposed direct torque control method compared to conventional methods.",
"title": ""
},
{
"docid": "3177e9dd683fdc66cbca3bd985f694b1",
"text": "Online communities allow millions of people who would never meet in person to interact. People join web-based discussion boards, email lists, and chat rooms for friendship, social support, entertainment, and information on technical, health, and leisure activities [24]. And they do so in droves. One of the earliest networks of online communities, Usenet, had over nine million unique contributors, 250 million messages, and approximately 200,000 active groups in 2003 [27], while the newer MySpace, founded in 2003, attracts a quarter million new members every day [27].",
"title": ""
},
{
"docid": "18216c0745ae3433b3b7f89bb7876a49",
"text": "This paper presents research using full body skeletal movements captured using video-based sensor technology developed by Vicon Motion Systems, to train a machine to identify different human emotions. The Vicon system uses a series of 6 cameras to capture lightweight markers placed on various points of the body in 3D space, and digitizes movement into x, y, and z displacement data. Gestural data from five subjects was collected depicting four emotions: sadness, joy, anger, and fear. Experimental results with different machine learning techniques show that automatic classification of this data ranges from 84% to 92% depending on how it is calculated. In order to put these automatic classification results into perspective a user study on the human perception of the same data was conducted with average classification accuracy of 93%.",
"title": ""
},
{
"docid": "695264db0ca1251ab0f63b04d41c68cd",
"text": "Reading comprehension tasks test the ability of models to process long-term context and remember salient information. Recent work has shown that relatively simple neural methods such as the Attention Sum-Reader can perform well on these tasks; however, these systems still significantly trail human performance. Analysis suggests that many of the remaining hard instances are related to the inability to track entity-references throughout documents. This work focuses on these hard entity tracking cases with two extensions: (1) additional entity features, and (2) training with a multi-task tracking objective. We show that these simple modifications improve performance both independently and in combination, and we outperform the previous state of the art on the LAMBADA dataset, particularly on difficult entity examples.",
"title": ""
},
{
"docid": "cbc6986bf415292292b7008ae4d13351",
"text": "In this work we present a method to improve the pruning step of the current state-of-the-art methodology to compress neural networks. The novelty of the proposed pruning technique is in its differentiability, which allows pruning to be performed during the backpropagation phase of the network training. This enables an end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. The experimental results show that the joint optimization of both the thresholds and the network weights permits to reach a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33 % compared to the current state-of-the-art. Furthermore, we believe that this is the first study where the generalization capabilities in transfer learning tasks of the features extracted by a pruned network are analyzed. To achieve this goal, we show that the representations learned using the proposed pruning methodology maintain the same effectiveness and generality of those learned by the corresponding non-compressed network on a set of different recognition tasks.",
"title": ""
},
{
"docid": "827e9045f932b146a8af66224e114be6",
"text": "Using a common set of attributes to determine which methodology to use in a particular data warehousing project.",
"title": ""
},
{
"docid": "89267dbf693643ea53696c7d545254ea",
"text": "Cognitive dissonance theory is applicable to very limited areas of consumer behavior according to the author. Published findings in support of the theory are equivocal; they fail to show that cognitive dissonance is the only possible cause of observed \"dissonance-reducing\" behavior. Experimental evidences are examined and their weaknesses pointed out by the author to justify his position. He also provides suggestions regarding the circumstances under which dissonance reduction may be useful in increasing the repurchase probability of a purchased brand.",
"title": ""
},
{
"docid": "c68633905f8bbb759c71388819e9bfa9",
"text": "An additional mechanical mechanism for a passive parallelogram-based exoskeleton arm-support is presented. It consists of several levers and joints and an attached extension coil spring. The additional mechanism has two favourable features. On the one hand it exhibits an almost iso-elastic behaviour whereby the lifting force of the mechanism is constant for a wide working range. Secondly, the value of the supporting force can be varied by a simple linear movement of a supporting joint. Furthermore a standard tension spring can be used to gain the desired behavior. The additional mechanism is a 4-link mechanism affixed to one end of the spring within the parallelogram arm-support. It has several geometrical parameters which influence the overall behaviour. A standard optimisation routine with constraints on the parameters is used to find an optimal set of geometrical parameters. Based on the optimized geometrical parameters a prototype was constructed and tested. It is a lightweight wearable system, with a weight of 1.9 kg. Detailed experiments reveal a difference between measured and calculated forces. These variations can be explained by a 60 % higher pre load force of the tension spring and a geometrical offset in the construction.",
"title": ""
},
{
"docid": "70ba0f4938630e07d9b145216a01177a",
"text": "For some decades radiation therapy has been proved successful in cancer treatment. It is the major task of clinical radiation treatment planning to realise on the one hand a high level dose of radiation in the cancer tissue in order to obtain maximum tumour control. On the other hand it is obvious that it is absolutely necessary to keep in the tissue outside the tumour, particularly in organs at risk, the unavoidable radiation as low as possible. No doubt, these two objectives of treatment planning – high level dose in the tumour, low radiation outside the tumour – have a basically contradictory nature. Therefore, it is no surprise that inverse mathematical models with dose distribution bounds tend to be infeasible in most cases. Thus, there is need for approximations compromising between overdosing the organs at risk and underdosing the target volume. Differing from the currently used time consuming iterative approach, which measures deviation from an ideal (non-achievable) treatment plan using recursively trial-and-error weights for the organs of interest, we go a new way trying to avoid a priori weight choices and consider the treatment planning problem as a multiple objective linear programming problem: with each organ of interest, target tissue as well as organs at risk, we associate an objective function measuring the maximal deviation from the prescribed doses. We build up a data base of relatively few efficient solutions representing and approximating the variety of Pareto solutions of the multiple objective linear programming problem. This data base can be easily scanned by physicians looking for an adequate treatment plan with the aid of an appropriate online tool. 1 The inverse radiation treatment problem – an introduction Every year, in Germany about 450.000 individuals are diagnosed with life-threatening forms of cancer. About 60% of these patients are treated with radiation; half of them are considered curable because their tumours are localised and susceptible to radiation. Nevertheless, despite the use of the best radiation therapy methods available, one third of these “curable” patients – nearly 40.000 people each year – die with primary tumours still active at the original site. Why does this occur ? Experts in the field have looked at the reasons for these failures and have concluded that radiation therapy planning – in particular in complicated anatomical situations – is often inadequate, providing either too little radiation to the tumour or too much radiation to nearby healthy tissue. Effective radiation therapy planning for treating malignent tumours is always a tightrope walk between ineffective underdose of tumour tissue – the target volume – and dangerous overdose of organs at risk being relevant for maintaining life quality of the cured patient. Therefore, it is the challenging task of a radiation therapy planner to realise a certain high dose level conform to the shape of the target volume in order to have a good prognosis for tumour control and to avoid overdose in relevant healthy tissue nearby. Part of this challenge is the computer aided representation of the relevant parts of the body. Modern scanning methods like computer tomography (CT), magnetic resonance tomography 1 on sabbatical leave at the Department of Engineering Science, University of Auckland, New Zealand",
"title": ""
},
{
"docid": "f5b02bdd74772ff2454a475e44077c8e",
"text": "This paper presents a new method - adversarial advantage actor-critic (Adversarial A2C), which significantly improves the efficiency of dialogue policy learning in task-completion dialogue systems. Inspired by generative adversarial networks (GAN), we train a discriminator to differentiate responses/actions generated by dialogue agents from responses/actions by experts. Then, we incorporate the discriminator as another critic into the advantage actor-critic (A2C) framework, to encourage the dialogue agent to explore state-action within the regions where the agent takes actions similar to those of the experts. Experimental results in a movie-ticket booking domain show that the proposed Adversarial A2C can accelerate policy exploration efficiently.",
"title": ""
},
{
"docid": "5a2c04519e5e810daed299140a0c398c",
"text": "Satisfying stringent customer requirement of visually detectable solder joint termination for high reliability applications requires the implementation of robust wettable flank strategies. One strategy involves the exposition of the sidewall via partial-cut singulation, where the exposed surface could be made wettable through tin (Sn) electroplating process. Herein, we report our systematic approach in evaluating the viability of mechanical partial-cut singulation process to produce Sn-plateable sidewalls, enabling the wettable flank technology using an automotive QFN package are technology carrier. Optimization DOE produced robust set of parameters showing that mechanical partial cut is a promising solution to produce sidewalls appropriate for Sn electroplating, synergistically yielding excellent wettable flanks.",
"title": ""
}
] |
scidocsrr
|
0709b44f286c2cd2bcc9c3ce4248f8f6
|
A Compact Dual-Band Fork-Shaped Monopole Antenna for Bluetooth and UWB Applications
|
[
{
"docid": "76e374d5a1e71822e1d72632136ad9f2",
"text": "This paper proposes two novel broadband microstrip antennas using coplanar feed-line. By feeding the patch with a suitable shape of the coplanar line in the slot of the patch, the broadband character is achieved. Compared with the antenna fed by a U-shaped feed-line, the antenna with L-shaped feed-line not only has wider bandwidth but also achieves the circular polarization character. The measured bandwidths of 25% and 34% are achieved, and both of the antennas have good radiation characteristics in the work band.",
"title": ""
}
] |
[
{
"docid": "04d66f58cea190d7d7ec8654b6c81d3b",
"text": "Lymphedema is a chronic, progressive condition caused by an imbalance of lymphatic flow. Upper extremity lymphedema has been reported in 16-40% of breast cancer patients following axillary lymph node dissection. Furthermore, lymphedema following sentinel lymph node biopsy alone has been reported in 3.5% of patients. While the disease process is not new, there has been significant progress in the surgical care of lymphedema that can offer alternatives and improvements in management. The purpose of this review is to provide a comprehensive update and overview of the current advances and surgical treatment options for upper extremity lymphedema.",
"title": ""
},
{
"docid": "6e1150266afa87b1145ce3a4777732cd",
"text": "Procedural models are widely used in computer graphics for generating realistic, natural-looking textures. However, these mathematical models are not perceptually meaningful, whereas the users, such as artists and designers, would prefer to make descriptions using intuitive and perceptual characteristics like \"repetitive,\" \"directional,\" \"structured,\" and so on. To make up for this gap, we investigated the perceptual dimensions of textures generated by a collection of procedural models. Two psychophysical experiments were conducted: free-grouping and rating. We applied Hierarchical Cluster Analysis (HCA) and Singular Value Decomposition (SVD) to discover the perceptual features used by the observers in grouping similar textures. The results suggested that existing dimensions in literature cannot accommodate random textures. We therefore utilized isometric feature mapping (Isomap) to establish a three-dimensional perceptual texture space which better explains the features used by humans in texture similarity judgment. Finally, we proposed computational models to map perceptual features to the perceptual texture space, which can suggest a procedural model to produce textures according to user-defined perceptual scales.",
"title": ""
},
{
"docid": "f81cd7e1cfbfc15992fba9368c1df30b",
"text": "The most challenging issue of conventional Time Amplifiers (TAs) is their limited Dynamic Range (DR). This paper presents a mathematical analysis to clarify principle of operation of conventional 2× TA's. The mathematical derivations release strength reduction of the current sources of the TA is the simplest way to increase DR. Besides, a new technique is presented to expand the Dynamic Range (DR) of conventional 2× TAs. Proposed technique employs current subtraction in place of changing strength of current sources using conventional gain compensation methods, which results in more stable gain over a wider DR. The TA is simulated using Spectre-rf in TSMC 0.18um COMS technology. DR of the 2× TA is expanded to 300ps only with 9% gain error while it consumes only 28uW from a 1.2V supply voltage.",
"title": ""
},
{
"docid": "6d8e78d8c48aab17aef0b9e608f13b99",
"text": "Optimal real-time distributed V2G and G2V management of electric vehicles Sonja Stüdli, Emanuele Crisostomi, Richard Middleton & Robert Shorten a Centre for Complex Dynamic Systems and Control, The University of Newcastle, New South Wales, Australia b Department of Energy, Systems, Territory and Constructions, University of Pisa, Pisa, Italy c IBM Research, Dublin, Ireland Accepted author version posted online: 10 Dec 2013.Published online: 05 Feb 2014.",
"title": ""
},
{
"docid": "7d8617c12c24e61b7ef003a5055fbf2f",
"text": "We present the first approximation algorithms for a large class of budgeted learning problems. One classicexample of the above is the budgeted multi-armed bandit problem. In this problem each arm of the bandithas an unknown reward distribution on which a prior isspecified as input. The knowledge about the underlying distribution can be refined in the exploration phase by playing the arm and observing the rewards. However, there is a budget on the total number of plays allowed during exploration. After this exploration phase,the arm with the highest (posterior) expected reward is hosen for exploitation. The goal is to design the adaptive exploration phase subject to a budget constraint on the number of plays, in order to maximize the expected reward of the arm chosen for exploitation. While this problem is reasonably well understood in the infinite horizon discounted reward setting, the budgeted version of the problem is NP-Hard. For this problem and several generalizations, we provide approximate policies that achieve a reward within constant factor of the reward optimal policy. Our algorithms use a novel linear program rounding technique based on stochastic packing.",
"title": ""
},
{
"docid": "e6788f228c52f48107804622aab297c4",
"text": "Scholarly publishing increasingly requires automated systems that semantically enrich documents in order to support management and quality assessment of scientific output. However, contextual information, such as the authors’ affiliations, references, and funding agencies, is typically hidden within PDF files. To access this information we have developed a processing pipeline that analyses the structure of a PDF document incorporating a diverse set of machine learning techniques. First, unsupervised learning is used to extract contiguous text blocks from the raw character stream as the basic logical units of the article. Next, supervised learning is employed to classify blocks into different meta-data categories, including authors and affiliations. Then, a set of heuristics are applied to detect the reference section at the end of the paper and segment it into individual reference strings. Sequence classification is then utilised to categorise the tokens of individual references to obtain information such as the journal and the year of the reference. Finally, we make use of named entity recognition techniques to extract references to research grants, funding agencies, and EU projects. Our system is modular in nature. Some parts rely on models learnt on training data, and the overall performance scales with the quality of these data sets.",
"title": ""
},
{
"docid": "966fa8e8eaf66201494633e582e11a31",
"text": "This paper describes the development of a noninvasive blood pressure measurement (NIBP) device based on the oscillometric principle. The device is composed of an arm cuff, an air-pumping motor, a solenoid valve, a pressure transducer, and a 2×16 characters LCD display module and a microcontroller which acts as the central controller and processor for the hardware. In the development stage, an auxiliary instrumentation for signal acquisition and digital signal processing using LabVIEW, which is also known as virtual instrument (VI), is incorporated for learning and experimentation purpose. Since the most problematic part of metrological evaluation of an oscillometric NIBP system is in the proprietary algorithms of determining systolic blood pressure (SBP) and diastolic blood pressure (DBP), the amplitude algorithm is used. The VI is a useful tool for studying data acquisition and signal processing to determine SBP and DBP from the maximum of the oscillations envelope. The knowledge from VI procedures is then adopted into a stand alone NIBP device. SBP and DBP are successfully obtained using the circuit developed for the NIBP device. The work done is a proof of design concept that requires further refinement.",
"title": ""
},
{
"docid": "09819b576716ae71644b08464d992d03",
"text": "We show empirically that in SGD training of deep neural networks, one can, at no or nearly no loss of accuracy, quantize the gradients aggressively—to but one bit per value—if the quantization error is carried forward across minibatches (error feedback). This size reduction makes it feasible to parallelize SGD through data-parallelism with fast processors like recent GPUs. We implement data-parallel deterministically distributed SGD by combining this finding with AdaGrad, automatic minibatch-size selection, double buffering, and model parallelism. Unexpectedly, quantization benefits AdaGrad, giving a small accuracy gain. For a typical Switchboard DNN with 46M parameters, we reach computation speeds of 27k frames per second (kfps) when using 2880 samples per minibatch, and 51kfps with 16k, on a server with 8 K20X GPUs. This corresponds to speed-ups over a single GPU of 3.6 and 6.3, respectively. 7 training passes over 309h of data complete in under 7h. A 160M-parameter model training processes 3300h of data in under 16h on 20 dual-GPU servers—a 10 times speed-up—albeit at a small accuracy loss.",
"title": ""
},
{
"docid": "14efecfd6ecfd5ecbcd6a4131ab7ff80",
"text": "INTRODUCTION\nAlthough craving plays an important role in relapse, there are few brief, valid and reliable instruments to measure the desire to use cocaine in routine clinical practice. The 45-item Cocaine Craving Questionnaire-Now (CCQ-Now) is widely used in research, but its length makes its use in everyday clinical work relatively impractical. This study sought to determine the psychometric properties of the CCQ-Brief, a measure composed of 10 items from the CCQ-Now, in treatment-seeking cocaine abusers.\n\n\nMETHOD\nSubjects with cocaine abuse or dependence (n=247) completed the CCQ-Brief, the CCQ-Now, the Voris Cocaine Craving Scale, the Beck Depression Inventory-II, the Beck Anxiety Inventory, and the Addiction Severity Index.\n\n\nRESULTS\nThe CCQ-Brief was significantly correlated with the CCQ-Now (r=.85, p<.01), the CCQ-Now with the items in common with the CCQ-Brief removed (r=.78, p<.01), all four subscales of the VCCS (craving intensity: r=.47, p<.01; mood: r=.27, p<.01; energy: r=.30, p<.01; sick feelings: r=.28, p<.01), the BDI-II (r=.39, p<.01), the BAI (r=.35, p<.01) and recent drug use (r=.26, p<.01). The internal consistency of the CCQ-Brief was strong (alpha=.90).\n\n\nDISCUSSION\nThe CCQ-Brief is a valid and reliable instrument that can be easily administered as a measure of current cocaine craving.",
"title": ""
},
{
"docid": "d2dc58cd947d0ff456687916031245a6",
"text": "A multi-story can be generated by the interactions of users in the interactive storytelling system. In this paper, we suggest narrative structure and corresponding Storytelling Markup Language. Actor, Action, and Constraint are declared and programmed using interactive storytelling system which generates the stories. Generated stories can be transformed to multimedia formats which are texts, images, animations, and others.",
"title": ""
},
{
"docid": "cb6d60c4948bcf2381cb03a0e7dc8312",
"text": "While humor has been historically studied from a psychological, cognitive and linguistic standpoint, its study from a computational perspective is an area yet to be explored in Computational Linguistics. There exist some previous works, but a characterization of humor that allows its automatic recognition and generation is far from being specified. In this work we build a crowdsourced corpus of labeled tweets, annotated according to its humor value, letting the annotators subjectively decide which are humorous. A humor classifier for Spanish tweets is assembled based on supervised learning, reaching a precision of 84% and a recall of 69%.",
"title": ""
},
{
"docid": "52d6711ebbafd94ab5404e637db80650",
"text": "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Qlearning with an -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.",
"title": ""
},
{
"docid": "4c49cebd579b2fef196d7ce600b1a044",
"text": "A GPU cluster is a cluster equipped with GPU devices. Excellent acceleration is achievable for computation-intensive tasks (e. g. matrix multiplication and LINPACK) and bandwidth-intensive tasks with data locality (e. g. finite-difference simulation). Bandwidth-intensive tasks such as large-scale FFTs without data locality are harder to accelerate, as the bottleneck often lies with the PCI between main memory and GPU device memory or the communication network between workstation nodes. That means optimizing the performance of FFT for a single GPU device will not improve the overall performance. This paper uses large-scale FFT as an example to show how to achieve substantial speedups for these more challenging tasks on a GPU cluster. Three GPU-related factors lead to better performance: firstly the use of GPU devices improves the sustained memory bandwidth for processing large-size data; secondly GPU device memory allows larger subtasks to be processed in whole and hence reduces repeated data transfers between memory and processors; and finally some costly main-memory operations such as matrix transposition can be significantly sped up by GPUs if necessary data adjustment is performed during data transfers. This technique of manipulating array dimensions during data transfer is the main technical contribution of this paper. These factors (as well as the improved communication library in our implementation) attribute to 24.3x speedup with respect to FFTW and 7x speedup with respect to Intel MKL for 4096 3D single-precision FFT on a 16-node cluster with 32 GPUs. Around 5x speedup with respect to both standard libraries are achieved for double precision.",
"title": ""
},
{
"docid": "e2df843bd6b491e904cc98f746c3314a",
"text": "Cryonic suspension is a relatively new technology that offers those who can afford it the chance to be 'frozen' for future revival when they reach the ends of their lives. This paper will examine the ethical status of this technology and whether its use can be justified. Among the arguments against using this technology are: it is 'against nature', and would change the very concept of death; no friends or family of the 'freezee' will be left alive when he is revived; the considerable expense involved for the freezee and the future society that will revive him; the environmental cost of maintaining suspension; those who wish to use cryonics might not live life to the full because they would economize in order to afford suspension; and cryonics could lead to premature euthanasia in order to maximize chances of success. Furthermore, science might not advance enough to ever permit revival, and reanimation might not take place due to socio-political or catastrophic reasons. Arguments advanced by proponents of cryonics include: the potential benefit to society; the ability to cheat death for at least a few more years; the prospect of immortality if revival is successful; and all the associated benefits that delaying or avoiding dying would bring. It emerges that it might be imprudent not to use the technology, given the relatively minor expense involved and the potential payoff. An adapted and more persuasive version of Pascal's Wager is presented and offered as a conclusive argument in favour of utilizing cryonic suspension.",
"title": ""
},
{
"docid": "e829a46ab8dd560f137b4c11c3626410",
"text": "Modeling dressed characters is known as a very tedious process. It u sually requires specifying 2D fabric patterns, positioning and assembling them in 3D, and then performing a physically-bas ed simulation. The latter accounts for gravity and collisions to compute the rest shape of the garment, with the ad equ te folds and wrinkles. This paper presents a more intuitive way to design virtual clothing. We start w ith a 2D sketching system in which the user draws the contours and seam-lines of the garment directly on a v irtu l mannequin. Our system then converts the sketch into an initial 3D surface using an existing method based on a p recomputed distance field around the mannequin. The system then splits the created surface into different pan els delimited by the seam-lines. The generated panels are typically not developable. However, the panels of a realistic garment must be developable, since each panel must unfold into a 2D sewing pattern. Therefore our sys tem automatically approximates each panel with a developable surface, while keeping them assembled along the s eams. This process allows us to output the corresponding sewing patterns. The last step of our method computes a natural rest shape for the 3D gar ment, including the folds due to the collisions with the body and gravity. The folds are generated using procedu ral modeling of the buckling phenomena observed in real fabric. The result of our algorithm consists of a realistic looking 3D mannequin dressed in the designed garment and the 2D patterns which can be used for distortion free texture mapping. The patterns we create also allow us to sew real replicas of the virtual garments.",
"title": ""
},
{
"docid": "966205d925e2c0840fcc9064fa450462",
"text": "Three diierent algorithms for obstacle detection are presented in this paper each based on diierent assumptions. The rst two algorithms are qualitative in that they return only yes/no answers regarding the presence of obstacles in the eld of view; no 3D reconstruction is performed. They have the advantage of fast determination of the existence of obstacles in a scene based on the solvability of a linear system. The rst algorithm uses information about the ground plane, while the second only assumes that the ground is planar. The third algorithm is quantitative in that it continuously estimates the ground plane and reconstructs partial 3D structures by determining the height above the ground plane of each point in the scene. Experimental results are presented for real and simulated data, and the performance of the three algorithms under diierent noise levels is compared in simulation. We conclude that in terms of the robustness of performance, the third algorithm is superior to the other two.",
"title": ""
},
{
"docid": "98796507d092548983120639417aa800",
"text": "Information retrieval studies that involve searching the Internet or marking phrases usually lack a well-defined number of negative cases. This prevents the use of traditional interrater reliability metrics like the kappa statistic to assess the quality of expert-generated gold standards. Such studies often quantify system performance as precision, recall, and F-measure, or as agreement. It can be shown that the average F-measure among pairs of experts is numerically identical to the average positive specific agreement among experts and that kappa approaches these measures as the number of negative cases grows large. Positive specific agreement-or the equivalent F-measure-may be an appropriate way to quantify interrater reliability and therefore to assess the reliability of a gold standard in these studies.",
"title": ""
},
{
"docid": "2e27078279131bf08b3f1cb060586599",
"text": "The QTW VTOL UAV, which features tandem tilt wings with propellers mounted at the mid-span of each wing, is one of the most promising UAV configurations, having both VTOL capability and high cruise performance. A six-degree-of-freedom dynamic simulation model covering the full range of the QTW flight envelope was developed and a flight control system including a transition schedule and a stability and control augmentation system (SCAS) was designed. The flight control system was installed in a small prototype QTW and a full transition flight test including vertical takeoff, accelerating transition, cruise, decelerating transition and hover landing was successfully accomplished.",
"title": ""
},
{
"docid": "1d29f224933954823228c25e5e99980e",
"text": "This study was carried out in a Turkish university with 216 undergraduate students of computer technology as respondents. The study aimed to develop a scale (UECUBS) to determine the unethical computer use behavior. A factor analysis of the related items revealed that the factors were can be divided under five headings; intellectual property, social impact, safety and quality, net integrity and information integrity. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
8a9a960688dfbd0bb9ac38020efe8bc4
|
Fingerprint Recognition Using Minutia Score Matching
|
[
{
"docid": "0a9debb7b20310f2f693b5c2b9a03576",
"text": "minutiae matching and have been well studied. However, this technology still suffers from problems associated with the handling of poor quality impressions. One problem besetting fingerprint matching is distortion. Distortion changes both geometric position and orientation, and leads to difficulties in establishing a match among multiple impressions acquired from the same finger tip. Marking all the minutiae accurately as well as rejecting false minutiae is another issue still under research. Our work has combined many methods to build a minutia extractor and a minutia matcher. The combination of multiple methods comes from a wide investigation into research papers. Also some novel changes like segmentation using Morphological operations, improved thinning, false minutiae removal methods, minutia marking with special considering the triple branch counting, minutia unification by decomposing a branch into three terminations, and matching in the unified x-y coordinate system after a two-step transformation are used in the work.",
"title": ""
}
] |
[
{
"docid": "fd392f5198794df04c70da6bc7fe2f0d",
"text": "Performance tuning in modern database systems requires a lot of expertise, is very time consuming and often misdirected. Tuning attempts often lack a methodology that has a holistic view of the database. The absence of historical diagnostic information to investigate performance issues at first occurrence exacerbates the whole tuning process often requiring that problems be reproduced before they can be correctly diagnosed. In this paper we describe how Oracle overcomes these challenges and provides a way to perform automatic performance diagnosis and tuning. We define a new measure called ‘Database Time’ that provides a common currency to gauge the performance impact of any resource or activity in the database. We explain how the Automatic Database Diagnostic Monitor (ADDM) automatically diagnoses the bottlenecks affecting the total database throughput and provides actionable recommendations to alleviate them. We also describe the types of performance measurements that are required to perform an ADDM analysis. Finally we show how ADDM plays a central role within Oracle 10g’s manageability framework to self-manage a database and provide a comprehensive tuning solution.",
"title": ""
},
{
"docid": "587ca964abb5708c896e2e4475116a6d",
"text": "The design and implementation of software for medical devices is challenging due to the closed-loop interaction with the patient, which is a stochastic physical environment. The safety-critical nature and the lack of existing industry standards for verification make this an ideal domain for exploring applications of formal modeling and closed-loop analysis. The biggest challenge is that the environment model(s) have to be both complex enough to express the physiological requirements and general enough to cover all possible inputs to the device. In this effort, we use a dual chamber implantable pacemaker as a case study to demonstrate verification of software specifications of medical devices as timed-automata models in UPPAAL. The pacemaker model is based on the specifications and algorithm descriptions from Boston Scientific. The heart is modeled using timed automata based on the physiology of heart. The model is gradually abstracted with timed simulation to preserve properties. A manual Counter-Example-Guided Abstraction and Refinement (CEGAR) framework has been adapted to refine the heart model when spurious counter-examples are found. To demonstrate the closed-loop nature of the problem and heart model refinement, we investigated two clinical cases of Pacemaker Mediated Tachycardia and verified their corresponding correction algorithms in the pacemaker. Along with our tools for code generation from UPPAAL models, this effort enables model-driven design and certification of software for medical devices.",
"title": ""
},
{
"docid": "a0d4d6c36cab8c5ed5be69bea1d8f302",
"text": "In this paper, we propose a simple, fast decoding algorithm that fosters diversity in neural generation. The algorithm modifies the standard beam search algorithm by adding an intersibling ranking penalty, favoring choosing hypotheses from diverse parents. We evaluate the proposed model on the tasks of dialogue response generation, abstractive summarization and machine translation. We find that diverse decoding helps across all tasks, especially those for which reranking is needed. We further propose a variation that is capable of automatically adjusting its diversity decoding rates for different inputs using reinforcement learning (RL). We observe a further performance boost from this RL technique.1",
"title": ""
},
{
"docid": "8afd1ab45198e9960e6a047091a2def8",
"text": "We study the response of complex networks subject to attacks on vertices and edges. Several existing complex network models as well as real-world networks of scientific collaborations and Internet traffic are numerically investigated, and the network performance is quantitatively measured by the average inverse geodesic length and the size of the largest connected subgraph. For each case of attacks on vertices and edges, four different attacking strategies are used: removals by the descending order of the degree and the betweenness centrality, calculated for either the initial network or the current network during the removal procedure. It is found that the removals by the recalculated degrees and betweenness centralities are often more harmful than the attack strategies based on the initial network, suggesting that the network structure changes as important vertices or edges are removed. Furthermore, the correlation between the betweenness centrality and the degree in complex networks is studied.",
"title": ""
},
{
"docid": "4ae4aa05befe374ab4e06d1c002efb53",
"text": "The convincing development in Internet of Things (IoT) enables the solutions to spur the advent of novel and fascinating applications. The main aim is to integrate IoT aware architecture to enhance smart healthcare systems for automatic environmental monitoring of hospital and patient health. Staying true to the IoT vision, we propose a smart hospital system (SHS), which relies on different, yet complimentary, technologies, specifically RFID, WSN and smart mobile, interoperating with each other through a Constrained Application Protocol (CoAP)/IPv6 over low-power wireless personal area network (6LoWPAN)/representational state transfer (REST) network infrastructure. RADIO frequency identification technologies have been increasingly used in various applications, such as inventory control, and object tracking. An RFID system typically consist of one or several readers and numerous tags. Each tag has a unique ID. The proposed SHS has highlighted a number of key capabilities and aspects of novelty, which represent a significant step forward.",
"title": ""
},
{
"docid": "91cb2ee27517441704bf739ee811d6c6",
"text": "The primo vascular system has a specific anatomical and immunohistochemical signature that sets it apart from the arteriovenous and lymphatic systems. With immune and endocrine functions, the primo vascular system has been found to play a large role in biological processes, including tissue regeneration, inflammation, and cancer metastases. Although scientifically confirmed in 2002, the original discovery was made in the early 1960s by Bong-Han Kim, a North Korean scientist. It would take nearly 40 years after that discovery for scientists to revisit Kim's research to confirm the early findings. The presence of primo vessels in and around blood and lymph vessels, nerves, viscera, and fascia, as well as in the brain and spinal cord, reveals a common link that could potentially open novel possibilities of integration with cranial, lymphatic, visceral, and fascial approaches in manual medicine.",
"title": ""
},
{
"docid": "779d5380c72827043111d00510e32bfd",
"text": "OBJECTIVE\nThe purpose of this review is 2-fold. The first is to provide a review for physiatrists already providing care for women with musculoskeletal pelvic floor pain and a resource for physiatrists who are interested in expanding their practice to include this patient population. The second is to describe how musculoskeletal dysfunctions involving the pelvic floor can be approached by the physiatrist using the same principles used to evaluate and treat others dysfunctions in the musculoskeletal system. This discussion clarifies that evaluation and treatment of pelvic floor pain of musculoskeletal origin is within the scope of practice for physiatrists. The authors review the anatomy of the pelvic floor, including the bony pelvis and joints, muscle and fascia, and the peripheral and autonomic nervous systems. Pertinent history and physical examination findings are described. The review concludes with a discussion of differential diagnosis and treatment of musculoskeletal pelvic floor pain in women. Improved recognition of pelvic floor dysfunction by healthcare providers will reduce impairment and disability for women with pelvic floor pain. A physiatrist is in the unique position to treat the musculoskeletal causes of this condition because it requires an expert grasp of anatomy, function, and the linked relationship between the spine and pelvis. Further research regarding musculoskeletal causes and treatment of pelvic floor pain will help validate these concepts and improve awareness and care for women limited by this condition.",
"title": ""
},
{
"docid": "b84971bc1f2d2ebf43815d33cea86c8c",
"text": "The container-inhabiting mosquito simulation model (CIMSiM) is a weather-driven, dynamic life table simulation model of Aedes aegypti (L.) and similar nondiapausing Aedes mosquitoes that inhabit artificial and natural containers. This paper presents a validation of CIMSiM simulating Ae. aegypti using several independent series of data that were not used in model development. Validation data sets include laboratory work designed to elucidate the role of diet on fecundity and rates of larval development and survival. Comparisons are made with four field studies conducted in Bangkok, Thailand, on seasonal changes in population dynamics and with a field study in New Orleans, LA, on larval habitat. Finally, predicted ovipositional activity of Ae. aegypti in seven cities in the southeastern United States for the period 1981-1985 is compared with a data set developed by the U.S. Public Health Service. On the basis of these comparisons, we believe that, for stated design goals, CIMSiM adequately simulates the population dynamics of Ae. aegypti in response to specific information on weather and immature habitat. We anticipate that it will be useful in simulation studies concerning the development and optimization of control strategies and that, with further field validation, can provide entomological inputs for a dengue virus transmission model.",
"title": ""
},
{
"docid": "c06c067294cbb7bbc129324591d2636c",
"text": "In this article, we propose a new method for localizing optic disc in retinal images. Localizing the optic disc and its center is the first step of most vessel segmentation, disease diagnostic, and retinal recognition algorithms. We use optic disc of the first four retinal images in DRIVE dataset to extract the histograms of each color component. Then, we calculate the average of histograms for each color as template for localizing the center of optic disc. The DRIVE, STARE, and a local dataset including 273 retinal images are used to evaluate the proposed algorithm. The success rate was 100, 91.36, and 98.9%, respectively.",
"title": ""
},
{
"docid": "173811394fd49c15b151fc9059acbe13",
"text": "The 'jewel in the crown' from the MIT90s [Management in the 90s] program is undoubtedly the Strategic Alignment Model (SAM) of Henderson and Venkatraman.",
"title": ""
},
{
"docid": "61615273dad80e5a0a95ecbe3002fd72",
"text": "Other than serving as building blocks for DNA and RNA, purine metabolites provide a cell with the necessary energy and cofactors to promote cell survival and proliferation. A renewed interest in how purine metabolism may fuel cancer progression has uncovered a new perspective into how a cell regulates purine need. Under cellular conditions of high purine demand, the de novo purine biosynthetic enzymes cluster near mitochondria and microtubules to form dynamic multienzyme complexes referred to as 'purinosomes'. In this review, we highlight the purinosome as a novel level of metabolic organization of enzymes in cells, its consequences for regulation of purine metabolism, and the extent that purine metabolism is being targeted for the treatment of cancers.",
"title": ""
},
{
"docid": "c3ee2beee84cd32e543c4b634062eeac",
"text": "In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.",
"title": ""
},
{
"docid": "0d4deabaaf6f78b16c4880e6179a76d8",
"text": "Alcohol drinking has been associated with increased blood pressure in epidemiological studies. We conducted a meta-analysis of randomized controlled trials to assess the effects of alcohol reduction on blood pressure. We included 15 randomized control trials (total of 2234 participants) published before June 1999 in which alcohol reduction was the only intervention difference between active and control treatment groups. Using a standard protocol, information on sample size, participant characteristics, study design, intervention methods, duration, and treatment results was abstracted independently by 3 investigators. By means of a fixed-effects model, findings from individual trials were pooled after results for each trial were weighted by the inverse of its variance. Overall, alcohol reduction was associated with a significant reduction in mean (95% confidence interval) systolic and diastolic blood pressures of -3.31 mm Hg (-2.52 to -4.10 mm Hg) and -2.04 mm Hg (-1.49 to -2.58 mm Hg), respectively. A dose-response relationship was observed between mean percentage of alcohol reduction and mean blood pressure reduction. Effects of intervention were enhanced in those with higher baseline blood pressure. Our study suggests that alcohol reduction should be recommended as an important component of lifestyle modification for the prevention and treatment of hypertension among heavy drinkers.",
"title": ""
},
{
"docid": "2dfb8e3f50c1968b441872fa4aa13fec",
"text": "An ultra-wideband Vivaldi antenna with dual-polarization capability is presented. A two-section quarter-wave balun feedline is developed to feed the tapered slot antenna, which improves the impedance matching performance especially in the low frequency regions. The dual-polarization is realized by orthogonally combining two identical Vivaldi antennas without a galvanic contact. Measured results have been presented with a fractional bandwidth of 172% from 0.56 GHz to 7.36 GHz for S11 < −10 dB and a good port isolation of S21 < −22 dB. The measured antenna gain of up to 9.4 dBi and cross-polarization discrimination (XPD) of more than 18 dB is achieved, making the antenna suitable for mobile communication testing in chambers or open-site facilities.",
"title": ""
},
{
"docid": "d35082d022280d25eea3e98596b70839",
"text": "OVERVIEW 795 DEFINING PROPERTIES OF THE BIOECOLOGICAL MODEL 796 Proposition I 797 Proposition II 798 FROM THEORY TO RESEARCH DESIGN: OPERATIONALIZING THE BIOECOLOGICAL MODEL 799 Developmental Science in the Discovery Mode 801 Different Paths to Different Outcomes: Dysfunction versus Competence 803 The Role of Experiments in the Bioecological Model 808 HOW DO PERSON CHARACTERISTICS INFLUENCE LATER DEVELOPMENT? 810 Force Characteristics as Shapers of Development 810 Resource Characteristics of the Person as Shapers of Development 812 Demand Characteristics of the Person as Developmental Inf luences 812 THE ROLE OF FOCUS OF ATTENTION IN PROXIMAL PROCESSES 813 PROXIMAL PROCESSES IN SOLO ACTIVITIES WITH OBJECTS AND SYMBOLS 814 THE MICROSYSTEM MAGNIFIED: ACTIVITIES, RELATIONSHIPS, AND ROLES 814 Effects of the Physical Environment on Psychological Development 814 The Mother-Infant Dyad as a Context of Development 815 BEYOND THE MICROSYSTEM 817 The Expanding Ecological Universe 818 Nature-Nurture Reconceptualized: A Bioecological Interpretation 819 TIME IN THE BIOECOLOGICAL MODEL: MICRO-, MESO-, AND MACROCHRONOLOGICAL SYSTEMS 820 FROM RESEARCH TO REALITY 822 THE BIOECOLOGICAL MODEL: A DEVELOPMENTAL ASSESSMENT 824 REFERENCES 825",
"title": ""
},
{
"docid": "14f127a8dd4a0fab5acd9db2a3924657",
"text": "Pesticides (herbicides, fungicides or insecticides) play an important role in agriculture to control the pests and increase the productivity to meet the demand of foods by a remarkably growing population. Pesticides application thus became one of the important inputs for the high production of corn and wheat in USA and UK, respectively. It also increased the crop production in China and India [1-4]. Although extensive use of pesticides improved in securing enough crop production worldwide however; these pesticides are equally toxic or harmful to nontarget organisms like mammals, birds etc and thus their presence in excess can cause serious health and environmental problems. Pesticides have thus become environmental pollutants as they are often found in soil, water, atmosphere and agricultural products, in harmful levels, posing an environmental threat. Its residual presence in agricultural products and foods can also exhibit acute or chronic toxicity on human health. Even at low levels, it can cause adverse effects on humans, plants, animals and ecosystems. Thus, monitoring of these pesticide and its residues become extremely important to ensure that agricultural products have permitted levels of pesticides [5-6]. Majority of pesticides belong to four classes, namely organochlorines, organophosphates, carbamates and pyrethroids. Organophosphates pesticides are a class of insecticides, of which many are highly toxic [7]. Until the 21st century, they were among the most widely used insecticides which included parathion, malathion, methyl parathion, chlorpyrifos, diazinon, dichlorvos, dimethoate, monocrotophos and profenofos. Organophosphate pesticides cause toxicity by inhibiting acetylcholinesterase enzyme [8]. It acts as a poison to insects and other animals, such as birds, amphibians and mammals, primarily by phosphorylating the acetylcholinesterase enzyme (AChE) present at nerve endings. This leads to the loss of available AChE and because of the excess acetylcholine (ACh, the impulse-transmitting substance), the effected organ becomes over stimulated. The enzyme is critical to control the transmission of nerve impulse from nerve fibers to the smooth and skeletal muscle cells, secretary cells and autonomic ganglia, and within the central nervous system (CNS). Once the enzyme reaches a critical level due to inactivation by phosphorylation, symptoms and signs of cholinergic poisoning get manifested [9].",
"title": ""
},
{
"docid": "1e2768be2148ff1fd102c6621e8da14d",
"text": "Example-based learning for computer vision can be difficult when a large number of examples to represent each pattern or object class is not available. In such situations, learning from a small number of samples is of practical value. To study this issue, the task of face expression recognition with a small number of training images of each expression is considered. A new technique based on linear programming for both feature selection and classifier training is introduced. A pairwise framework for feature selection, instead of using all classes simultaneously, is presented. Experimental results compare the method with three others: a simplified Bayes classifier, support vector machine, and AdaBoost. Finally, each algorithm is analyzed and a new categorization of these algorithms is given, especially for learning from examples in the small sample case.",
"title": ""
},
{
"docid": "25b77292def9ba880fecb58a38897400",
"text": "In this paper, we present a successful operation of Gallium Nitride(GaN)-based three-phase inverter with high efficiency of 99.3% for driving motor at 900W under the carrier frequency of 6kHz. This efficiency well exceeds the value by IGBT (Insulated Gate Bipolar Transistor). This demonstrates that GaN has a great potential for power switching application competing with SiC. Fully reduced on-state resistance in a new normally-off GaN transistor called Gate Injection Transistor (GIT) greatly helps to increase the efficiency. In addition, use of the bidirectional operation of the lateral and compact GITs with synchronous gate driving, the inverter is operated free from fly-wheel diodes which have been connected in parallel with IGBTs in a conventional inverter system.",
"title": ""
},
{
"docid": "394854761e27aa7baa6fa2eea60f347d",
"text": "Our goal is to complement an entity ranking with human-readable explanations of how those retrieved entities are connected to the information need. Relation extraction technology should aid in finding such support passages, especially in combination with entities and query terms. This work explores how the current state of the art in unsupervised relation extraction (OpenIE) contributes to a solution for the task, assessing potential, limitations, and avenues for further investigation.",
"title": ""
},
{
"docid": "daf751e821c730db906c40ccf4678a90",
"text": "Data provided by Internet of Things (IoT) are time series and have some specific characteristics that must be considered with regard to storage and management. IoT data is very likely to be stored in NoSQL system databases where there are some particular engine and compaction strategies to manage time series data. In this article, two of these strategies found in the open source Cassandra database system are described, analyzed and compared. The configuration of these strategies is not trivial and may be very time consuming. To provide indicators, the strategy with the best time performance had its main parameter tested along 14 different values and results are shown, related to both response time and storage space needed. The results may help users to configure their IoT NoSQL databases in an efficient setup, may help designers to improve database compaction strategies or encourage the community to set new default values for the compaction strategies.",
"title": ""
}
] |
scidocsrr
|
80fdd5b3d91cfc2c6e561cdf529eabb5
|
Artificial Roughness Encoding with a Bio-inspired MEMS- based Tactile Sensor Array
|
[
{
"docid": "f3ee129af2a833f8775c5366c188d71c",
"text": "Strong motivation for developing new prosthetic hand devices is provided by the fact that low functionality and controllability—in addition to poor cosmetic appearance—are the most important reasons why amputees do not regularly use their prosthetic hands. This paper presents the design of the CyberHand, a cybernetic anthropomorphic hand intended to provide amputees with functional hand replacement. Its design was bio-inspired in terms of its modular architecture, its physical appearance, kinematics, sensorization, and actuation, and its multilevel control system. Its underactuated mechanisms allow separate control of each digit as well as thumb–finger opposition and, accordingly, can generate a multitude of grasps. Its sensory system was designed to provide proprioceptive information as well as to emulate fundamental functional properties of human tactile mechanoreceptors of specific importance for grasp-and-hold tasks. The CyberHand control system presumes just a few efferent and afferent channels and was divided in two main layers: a high-level control that interprets the user’s intention (grasp selection and required force level) and can provide pertinent sensory feedback and a low-level control responsible for actuating specific grasps and applying the desired total force by taking advantage of the intelligent mechanics. The grasps made available by the high-level controller include those fundamental for activities of daily living: cylindrical, spherical, tridigital (tripod), and lateral grasps. The modular and flexible design of the CyberHand makes it suitable for incremental development of sensorization, interfacing, and control strategies and, as such, it will be a useful tool not only for clinical research but also for addressing neuroscientific hypotheses regarding sensorimotor control.",
"title": ""
}
] |
[
{
"docid": "afffadc35ac735d11e1a415c93d1c39f",
"text": "We examine self-control problems — modeled as time-inconsistent, presentbiased preferences—in a model where a person must do an activity exactly once. We emphasize two distinctions: Do activities involve immediate costs or immediate rewards, and are people sophisticated or naive about future self-control problems? Naive people procrastinate immediate-cost activities and preproperate—do too soon—immediate-reward activities. Sophistication mitigates procrastination, but exacerbates preproperation. Moreover, with immediate costs, a small present bias can severely harm only naive people, whereas with immediate rewards it can severely harm only sophisticated people. Lessons for savings, addiction, and elsewhere are discussed. (JEL A12, B49, C70, D11, D60, D74, D91, E21)",
"title": ""
},
{
"docid": "ba0fab446ba760a4cb18405a05cf3979",
"text": "Please c Disaster Summary. — This study aims at understanding the role of education in promoting disaster preparedness. Strengthening resilience to climate-related hazards is an urgent target of Goal 13 of the Sustainable Development Goals. Preparing for a disaster such as stockpiling of emergency supplies or having a family evacuation plan can substantially minimize loss and damages from natural hazards. However, the levels of household disaster preparedness are often low even in disaster-prone areas. Focusing on determinants of personal disaster preparedness, this paper investigates: (1) pathways through which education enhances preparedness; and (2) the interplay between education and experience in shaping preparedness actions. Data analysis is based on face-to-face surveys of adults aged 15 years in Thailand (N = 1,310) and the Philippines (N = 889, female only). Controlling for socio-demographic and contextual characteristics, we find that formal education raises the propensity to prepare against disasters. Using the KHB method to further decompose the education effects, we find that the effect of education on disaster preparedness is mainly mediated through social capital and disaster risk perception in Thailand whereas there is no evidence that education is mediated through observable channels in the Philippines. This suggests that the underlying mechanisms explaining the education effects are highly context-specific. Controlling for the interplay between education and disaster experience, we show that education raises disaster preparedness only for those households that have not been affected by a disaster in the past. Education improves abstract reasoning and anticipation skills such that the better educated undertake preventive measures without needing to first experience the harmful event and then learn later. In line with recent efforts of various UN agencies in promoting education for sustainable development, this study provides a solid empirical evidence showing positive externalities of education in disaster risk reduction. 2017TheAuthors.PublishedbyElsevierLtd.This is an open access article under theCCBY-NC-ND license (http://creativecommons.org/ licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "07a42e7b4c5bc8088e9ff9b57c46f5fb",
"text": "In this paper, the concept of divergent component of motion (DCM, also called “Capture Point”) is extended to 3-D. We introduce the “Enhanced Centroidal Moment Pivot point” (eCMP) and the “Virtual Repellent Point” (VRP), which allow for the encoding of both direction and magnitude of the external forces and the total force (i.e., external plus gravitational forces) acting on the robot. Based on eCMP, VRP, and DCM, we present methods for real-time planning and tracking control of DCM trajectories in 3-D. The basic DCM trajectory generator is extended to produce continuous leg force profiles and to facilitate the use of toe-off motion during double support. The robustness of the proposed control framework is thoroughly examined, and its capabilities are verified both in simulations and experiments.",
"title": ""
},
{
"docid": "4523c880e099da9bbade4870da04f0c4",
"text": "Despite the hype about blockchains and distributed ledgers, formal abstractions of these objects are scarce1. To face this issue, in this paper we provide a proper formulation of a distributed ledger object. In brief, we de ne a ledger object as a sequence of records, and we provide the operations and the properties that such an object should support. Implemen- tation of a ledger object on top of multiple (possibly geographically dispersed) computing devices gives rise to the distributed ledger object. In contrast to the centralized object, dis- tribution allows operations to be applied concurrently on the ledger, introducing challenges on the consistency of the ledger in each participant. We provide the de nitions of three well known consistency guarantees in terms of the operations supported by the ledger object: (1) atomic consistency (linearizability), (2) sequential consistency, and (3) eventual consistency. We then provide implementations of distributed ledgers on asynchronous message passing crash- prone systems using an Atomic Broadcast service, and show that they provide eventual, sequen- tial or atomic consistency semantics respectively. We conclude with a variation of the ledger the validated ledger which requires that each record in the ledger satis es a particular validation rule.",
"title": ""
},
{
"docid": "7e91815398915670fadba3c60e772d14",
"text": "Online reviews are valuable resources not only for consumers to make decisions before purchase, but also for providers to get feedbacks for their services or commodities. In Aspect Based Sentiment Analysis (ABSA), it is critical to identify aspect categories and extract aspect terms from the sentences of user-generated reviews. However, the two tasks are often treated independently, even though they are closely related. Intuitively, the learned knowledge of one task should inform the other learning task. In this paper, we propose a multi-task learning model based on neural networks to solve them together. We demonstrate the improved performance of our multi-task learning model over the models trained separately on three public dataset released by SemEval work-",
"title": ""
},
{
"docid": "30740e33cdb2c274dbd4423e8f56405e",
"text": "A conspicuous ability of the brain is to seamlessly assimilate and process spatial and temporal features of sensory stimuli. This ability is indispensable for the recognition of natural stimuli. Yet, a general computational framework for processing spatiotemporal stimuli remains elusive. Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity.",
"title": ""
},
{
"docid": "af4db4d9be3f652445a47e2985070287",
"text": "BACKGROUND\nSurgical Site Infections (SSIs) are infections of incision or deep tissue at operation sites. These infections prolong hospitalization, delay wound healing, and increase the overall cost and morbidity.\n\n\nOBJECTIVES\nThis study aimed to investigate anaerobic and aerobic bacteria prevalence in surgical site infections and determinate antibiotic susceptibility pattern in these isolates.\n\n\nMATERIALS AND METHODS\nOne hundred SSIs specimens were obtained by needle aspiration from purulent material in depth of infected site. These specimens were cultured and incubated in both aerobic and anaerobic condition. For detection of antibiotic susceptibility pattern in aerobic and anaerobic bacteria, we used disk diffusion, agar dilution, and E-test methods.\n\n\nRESULTS\nA total of 194 bacterial strains were isolated from 100 samples of surgical sites. Predominant aerobic and facultative anaerobic bacteria isolated from these specimens were the members of Enterobacteriaceae family (66, 34.03%) followed by Pseudomonas aeruginosa (26, 13.4%), Staphylococcus aureus (24, 12.37%), Acinetobacter spp. (18, 9.28%), Enterococcus spp. (16, 8.24%), coagulase negative Staphylococcus spp. (14, 7.22%) and nonhemolytic streptococci (2, 1.03%). Bacteroides fragilis (26, 13.4%), and Clostridium perfringens (2, 1.03%) were isolated as anaerobic bacteria. The most resistant bacteria among anaerobic isolates were B. fragilis. All Gram-positive isolates were susceptible to vancomycin and linezolid while most of Enterobacteriaceae showed sensitivity to imipenem.\n\n\nCONCLUSIONS\nMost SSIs specimens were polymicrobial and predominant anaerobic isolate was B. fragilis. Isolated aerobic and anaerobic strains showed high level of resistance to antibiotics.",
"title": ""
},
{
"docid": "044de981e34f0180accfb799063a7ec1",
"text": "This paper proposes a novel hybrid full-bridge three-level LLC resonant converter. It integrates the advantages of the hybrid full-bridge three-level converter and the LLC resonant converter. It can operate not only under three-level mode but also under two-level mode, so it is very suitable for wide input voltage range application, such as fuel cell power system. The input current ripple and output filter can also be reduced. Three-level leg switches just sustain only half of the input voltage. ZCS is achieved for the rectifier diodes, and the voltage stress across the rectifier diodes can be minimized to the output voltage. The main switches can realize ZVS from zero to full load. A 200-400 V input, 360 V/4 A output prototype converter is built in our lab to verify the operation principle of the proposed converter",
"title": ""
},
{
"docid": "427ebc0500e91e842873c4690cdacf79",
"text": "Bounding volume hierarchy (BVH) has been widely adopted as the acceleration structure in broad-phase collision detection. Previous state-of-the-art BVH-based collision detection approaches exploited the spatio-temporal coherence of simulations by maintaining a bounding volume test tree (BVTT) front. A major drawback of these algorithms is that large deformations in the scenes decrease culling efficiency and slow down collision queries. Moreover, for front-based methods, the inefficient caching on GPU caused by the arbitrary layout of BVH and BVTT front nodes becomes a critical performance issue. We present a fast and robust BVH-based collision detection scheme on GPU that addresses the above problems by ordering and restructuring BVHs and BVTT fronts. Our techniques are based on the use of histogram sort and an auxiliary structure BVTT front log, through which we analyze the dynamic status of BVTT front and BVH quality. Our approach efficiently handles interand intra-object collisions and performs especially well in simulations where there is considerable spatio-temporal coherence. The benchmark results demonstrate that our approach is significantly faster than the previous BVH-based method, and also outperforms other state-of-the-art spatial subdivision schemes in terms of speed. CCS Concepts •Computing methodologies → Collision detection; Physical simulation;",
"title": ""
},
{
"docid": "c447e34a5048c7fe2d731aaa77b87dd3",
"text": "Bullying, in both physical and cyber worlds, has been recognized as a serious health issue among adolescents. Given its significance, scholars are charged with identifying factors that influence bullying involvement in a timely fashion. However, previous social studies of bullying are handicapped by data scarcity. The standard psychological science approach to studying bullying is to conduct personal surveys in schools. The sample size is typically in the hundreds, and these surveys are often collected only once. On the other hand, the few computational studies narrowly restrict themselves to cyberbullying, which accounts for only a small fraction of all bullying episodes.",
"title": ""
},
{
"docid": "0eec3e2c266f6c8dd39b38320a4e70fa",
"text": "The development of Urdu Nastalique O Character Recognition (OCR) is a challenging task due to the cursive nature of Urdu, complexities of Nastalique writing style and layouts of Urdu document images. In this paper, the framework of Urdu Nastalique OCR is presented. The presented system supports the recognition of Urdu Nastalique document images having font size between 14 to 44. has 86.15% ligature recognition accuracy tested on 224 document images.",
"title": ""
},
{
"docid": "c2fc4e65c484486f5612f4006b6df102",
"text": "Although flat item category structure where categories are independent in a same level has been well studied to enhance recommendation performance, in many real applications, item category is often organized in hierarchies to reflect the inherent correlations among categories. In this paper, we propose a novel matrix factorization model by exploiting category hierarchy from the perspectives of users and items for effective recommendation. Specifically, a user (an item) can be influenced (characterized) by her preferred categories (the categories it belongs to) in the hierarchy. We incorporate how different categories in the hierarchy co-influence a user and an item. Empirical results show the superiority of our approach against other counterparts.",
"title": ""
},
{
"docid": "924eb275a1205dbf7907a58fc1cee5b6",
"text": "BACKGROUND\nNutrient status of B vitamins, particularly folate and vitamin B-12, may be related to cognitive ageing but epidemiological evidence remains inconclusive.\n\n\nOBJECTIVE\nThe aim of this study was to estimate the association of serum folate and vitamin B-12 concentrations with cognitive function in middle-aged and older adults from three Central and Eastern European populations.\n\n\nMETHODS\nMen and women aged 45-69 at baseline participating in the Health, Alcohol and Psychosocial factors in Eastern Europe (HAPIEE) study were recruited in Krakow (Poland), Kaunas (Lithuania) and six urban centres in the Czech Republic. Tests of immediate and delayed recall, verbal fluency and letter search were administered at baseline and repeated in 2006-2008. Serum concentrations of biomarkers at baseline were measured in a sub-sample of participants. Associations of vitamin quartiles with baseline (n=4166) and follow-up (n=2739) cognitive domain-specific z-scores were estimated using multiple linear regression.\n\n\nRESULTS\nAfter adjusting for confounders, folate was positively associated with letter search and vitamin B-12 with word recall in cross-sectional analyses. In prospective analyses, participants in the highest quartile of folate had higher verbal fluency (p<0.01) and immediate recall (p<0.05) scores compared to those in the bottom quartile. In addition, participants in the highest quartile of vitamin B-12 had significantly higher verbal fluency scores (β=0.12; 95% CI=0.02, 0.21).\n\n\nCONCLUSIONS\nFolate and vitamin B-12 were positively associated with performance in some but not all cognitive domains in older Central and Eastern Europeans. These findings do not lend unequivocal support to potential importance of folate and vitamin B-12 status for cognitive function in older age. Long-term longitudinal studies and randomised trials are required before drawing conclusions on the role of these vitamins in cognitive decline.",
"title": ""
},
{
"docid": "34a46b80f025cd8cd25243a777b4ff6a",
"text": "This research attempts to investigate the effects of blog marketing on brand attitude and purchase intention. The elements of blog marketing are identified as community identification, interpersonal trust, message exchange, and two-way communication. The relationships among variables are pictured on the fundamental research framework provided by this study. Data were collected via an online questionnaire and 727 useable samples were collected and analyzed utilizing AMOS 5.0. The empirical findings show that the blog marketing elements can impact on brand attitude positively except for the element of community identification. Further, the analysis result also verifies the moderating effects on the relationship between blog marketing elements and brand attitude.",
"title": ""
},
{
"docid": "f1cb1df8ad0b78f0f47b2cfcf2e9c5b6",
"text": "Quantitative performance analysis in sports has become mainstream in the last decade. The focus of the analyses is shifting towards more sport-speci ic metrics due to novel technologies. These systems measure the movements of the players and the events happening during trainings and games. This allows for a more detailed evaluation of professional athletes with implications on areas such as opponent scouting, planning of training sessions, or player scouting. Previousworks that analyze soccer-related logs focus on the game-relatedperformanceof theplayers and teams. Vast majority of these methodologies concentrate on descriptive statistics that capture some part of the players’ strategy. For example, in case of soccer, the average number of shots, goals, fouls, passes are derived both for the teams and for the players [1, 5]. Other works identify and analyze the outcome of the strategies that teams apply [18, 16, 13, 11, 9, 24, 14]. However, the physical performance and in particular the movements of players has not received detailed attention yet. It is challenging to get access to datasets related to the physical performance of soccer players. The teams consider such information highly con idential, especially if it covers in-game performance. Despite the fact that numerous teams deployed player tracking systems in their stadiums, datasets of this nature are not available for research or for public usage. It is nearly impossible to havequantitative information on the physical performance of all the teams of a competition. Hence, most of the analysis and evaluation of the players’ performance do not contain much information on the physical aspect of the game, creating a blindspot in performance analysis. We propose a novelmethod to solve this issue by derivingmovement characteristics of soccer players. We use event-based datasets from data provider companies covering 50+ soccer leagues allowing us to analyze the movement pro iles of potentially tens of thousands of players without any major investment. Our methodology does not require expensive, dedicated player tracking system deployed in the stadium. Instead, if the game is broadcasted, our methodology can be used. As a consequence, our technique does not require the consent of the involved teams yet it can provide insights on the physical performance of many players in different teams. The main contribution of our work is threefold:",
"title": ""
},
{
"docid": "5eb526843c41d2549862b60c17110b5b",
"text": "■ Abstract We explore the social dimension that enables adaptive ecosystem-based management. The review concentrates on experiences of adaptive governance of socialecological systems during periods of abrupt change (crisis) and investigates social sources of renewal and reorganization. Such governance connects individuals, organizations, agencies, and institutions at multiple organizational levels. Key persons provide leadership, trust, vision, meaning, and they help transform management organizations toward a learning environment. Adaptive governance systems often self-organize as social networks with teams and actor groups that draw on various knowledge systems and experiences for the development of a common understanding and policies. The emergence of “bridging organizations” seem to lower the costs of collaboration and conflict resolution, and enabling legislation and governmental policies can support self-organization while framing creativity for adaptive comanagement efforts. A resilient social-ecological system may make use of crisis as an opportunity to transform into a more desired state.",
"title": ""
},
{
"docid": "7fa8d82b55c5ae2879123380ef1a8505",
"text": "In the general context of Knowledge Discovery, speciic techniques , called Text Mining techniques, are necessary to extract information from unstructured textual data. The extracted information can then be used for the classiication of the content of large textual bases. In this paper, we present two examples of information that can be automatically extracted from text collections: probabilistic associations of keywords and prototypical document instances. The Natural Language Processing (NLP) tools necessary for such extractions are also presented.",
"title": ""
},
{
"docid": "3038334926608dbe4cdb091cf0e955eb",
"text": "Cloud computing has undergone rapid expansion throughout the last decade. Many companies and organizations have made the transition from tra ditional data centers to the cloud due to its flexibility and lower cost. However, traditional data centers are still being relied upon by those who are less certain about the security of cloud. This problem is highlighted by the fact that there only exist limited efforts on threat modeling for cloud data centers. In this paper, we conduct comprehensive threat modeling exercises based on two representative cloud infrastructures using several popular threat modeling methods, including attack surface, attack trees, attack graphs, and security metrics based on attack trees and attack graphs, respectively. Those threat modeling efforts provide cloud providers practical lessons and means toward better evaluating, understanding, and improving their cloud infrastructures. Our results may also imbed more con fidence in potential cloud tenants by providing them a clearer picture about po tential threats in cloud infrastructures and corresponding solutions.",
"title": ""
},
{
"docid": "9f04ac4067179aadf5e429492c7625e9",
"text": "We provide a model that links an asset’s market liquidity — i.e., the ease with which it is traded — and traders’ funding liquidity — i.e., the ease with which they can obtain funding. Traders provide market liquidity, and their ability to do so depends on their availability of funding. Conversely, traders’ funding, i.e., their capital and the margins they are charged, depend on the assets’ market liquidity. We show that, under certain conditions, margins are destabilizing and market liquidity and funding liquidity are mutually reinforcing, leading to liquidity spirals. The model explains the empirically documented features that market liquidity (i) can suddenly dry up, (ii) has commonality across securities, (iii) is related to volatility, (iv) is subject to “flight to quality”, and (v) comoves with the market, and it provides new testable predictions.",
"title": ""
},
{
"docid": "fa20d7bf8a6e99691a42dcd756ed1cc6",
"text": "IoT (Internet of Things) is acommunication network that connects physical or things to each other or with a group all together. The use is widely popular nowadays and its usage has expanded into interesting subjects. Especially, it is getting more popular to research in cross subjects such as mixing smart systems with computer sciences and engineering applications together. Object detection is one of these subjects. Realtime object detection is one of the foremost interesting subjects because of its compute costs. Gaps in methodology, unknown concepts and insufficiency in mathematical modeling makes it harder for designing these computing algorithms. Algortihms in these applications can be developed with in machine learning and/or numerical methods that are available in scientific literature. These operations are possible only if communication of objects within theirselves in physical space and awareness of the objects nearby. Artificial Neural Networks may help in these studies. In this study, yolo algorithm which is seen as a key element for real-time object detection in IoT is researched. It is realized and shown in results that optimization of computing and analyzation of system aside this research which takes Yolo algorithm as a foundation point [10]. As a result, it is seen that our model approach has an interesting potential and novelty.",
"title": ""
}
] |
scidocsrr
|
c2ee1f1e8bc5b50cdb12761b88029339
|
Business Process Analytics
|
[
{
"docid": "4ca4ccd53064c7a9189fef3e801612a0",
"text": "workflows, data warehousing, business intelligence Process design and automation technologies are being increasingly used by both traditional and newly-formed, Internet-based enterprises in order to improve the quality and efficiency of their administrative and production processes, to manage e-commerce transactions, and to rapidly and reliably deliver services to businesses and individual customers.",
"title": ""
}
] |
[
{
"docid": "1381104da316d0e1b66fce7f3b51a153",
"text": "Automatic segmentation and quantification of skeletal structures has a variety of applications for biological research. Although solutions for good quality X-ray images of human skeletal structures are in existence in recent years, automatic solutions working on poor quality X-ray images of mice are rare. This paper proposes a fully automatic solution for spine segmentation and curvature quantification from X-ray images of mice. The proposed solution consists of three stages, namely preparation of the region of interest, spine segmentation, and spine curvature quantification, aiming to overcome technical difficulties in processing the X-ray images. We examined six different automatic measurements for quantifying the spine curvature through tests on a sample data set of 100 images. The experimental results show that some of the automatic measures are very close to and consistent with the best manual measurement results by annotators. The test results also demonstrate the effectiveness of the curvature quantification produced by the proposed solution in distinguishing abnormally shaped spines from the normal ones with accuracy up to 98.6%.",
"title": ""
},
{
"docid": "e9db97070b87e567ff7904fe40f30086",
"text": "OBJECTIVES\nCongenital adrenal hyperplasia (CAH) is a disease that occurs during fetal development and can lead to virilization in females or death in newborn males if not discovered early in life. Because of this there is a need to seek morphological markers in order to help diagnose the disease. In order to test the hypothesis that prenatal hormones can affect the sexual dimorphic pattern 2D:4D digit ratio in individual with CAH, the aim of this study was to compare the digit ratio in female and male patients with CAH and control subjects.\n\n\nMETHODS\nThe 2D:4D ratios in both hands of 40 patients (31 females-46, XX, and 9 males-46, XY) were compared with the measures of control individuals without CAH (100 males and 100 females).\n\n\nRESULTS\nFemales with CAH showed 2D:4D ratios typical of male controls (0.950 and 0.947) in both hands (P < 0.001). In CAH males the left hand 2D:4D ratio (0.983) was statistically different from that of male controls (P < 0.05).\n\n\nCONCLUSIONS\nThese finding support the idea that sexual dimorphism in skeletal development in early fetal life is associated with differences between the exposure to androgens in males and females, and significant differences associated with adrenal hyperplasia. Although the effects of prenatal androgens on skeletal developmental are supported by numerous studies, further investigation is yet required to clarify the disease and establish the digit ratio as a biomarker for CAH.",
"title": ""
},
{
"docid": "1420f07e309c114dfc264797ab82ceec",
"text": "Introduction: The knowledge of clinical spectrum and epidemiological profile of critically ill children plays a significant role in the planning of health policies that would mitigate various factors related to the evolution of diseases prevalent in these sectors. The data collected enable prospective comparisons to be made with benchmark standards including regional and international units for the continuous pursuit of providing essential health care and improving the quality of patient care. Purpose: To study the clinical spectrum and epidemiological profile of the critically ill children admitted to the pediatric intensive care unit at a tertiary care center in South India. Materials and Methods: Descriptive data were collected retrospectively from the Hospital medical records between 2013 and 2016. Results: A total of 1833 patients were analyzed during the 3-year period, of which 1166 (63.6%) were males and 667 (36.4%) were females. A mean duration of stay in pediatric intensive care unit (PICU) was 2.21 ± 1.90 days. Respiratory system was the most common system affected in our study 738 (40.2 %). Acute poisoning in children constituted 99 patients (5.4%). We observed a mortality rate of 1.96%, with no association with age or sex. The mortality rate was highest in infants below 1-year of age (50%). In our study, the leading systemic cause for both admission and death was the respiratory system. Conclusion: This study analyses the epidemiological pattern of patients admitted to PICU in South India. We would also like to emphasize on public health prevention strategies and community health education which needs to be reinforced, especially in remote places and in rural India. This, in turn, would help in decreasing the cases of unknown bites, scorpion sting, poisoning and arthropod-borne illnesses, which are more prevalent in this part of the country.",
"title": ""
},
{
"docid": "46c2d96220d670115f9b4dba4e600ec8",
"text": "The primary purpose of this paper is to provide an in-depth analysis of different platforms available for performing big data analytics. This paper surveys different hardware platforms available for big data analytics and assesses the advantages and drawbacks of each of these platforms based on various metrics such as scalability, data I/O rate, fault tolerance, real-time processing, data size supported and iterative task support. In addition to the hardware, a detailed description of the software frameworks used within each of these platforms is also discussed along with their strengths and drawbacks. Some of the critical characteristics described here can potentially aid the readers in making an informed decision about the right choice of platforms depending on their computational needs. Using a star ratings table, a rigorous qualitative comparison between different platforms is also discussed for each of the six characteristics that are critical for the algorithms of big data analytics. In order to provide more insights into the effectiveness of each of the platform in the context of big data analytics, specific implementation level details of the widely used k-means clustering algorithm on various platforms are also described in the form pseudocode.",
"title": ""
},
{
"docid": "6a1a9c6cb2da06ee246af79fdeedbed9",
"text": "The world has revolutionized and phased into a new era, an era which upholds the true essence of technology and digitalization. As the market has evolved at a staggering scale, it is must to exploit and inherit the advantages and opportunities, it provides. With the advent of web 2.0, considering the scalability and unbounded reach that it provides, it is detrimental for an organization to not to adopt the new techniques in the competitive stakes that this emerging virtual world has set along with its advantages. The transformed and highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users’ reviews and comments from micro-blogging sites regarding their services and products. This type of analysis makes those organizations capable to assess, what the consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of products and services. This study focuses on critical analysis of the literature from year 2012 to 2017 on sentiment analysis by using SVM (support vector machine). SVM is one of the widely used supervised machine learning techniques for text classification. This systematic review will serve the scholars and researchers to analyze the latest work of sentiment analysis with SVM as well as provide them a baseline for future trends and comparisons. Keywords—Sentiment analysis; polarity detection; machine learning; support vector machine (SVM); support vector machine; SLR; systematic literature review",
"title": ""
},
{
"docid": "a112cd31e136054bdf9d34c82b960d95",
"text": "We propose a completely automatic approach for recognizing low resolution face images captured in uncontrolled environment. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low resolution and the high resolution training images such that the distance between them approximates the distance had both the images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken for computing the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost from a few reference images. Experimental evaluation on the real world challenging databases and comparison with the state-of-the-art super-resolution, classifier based and cross modal synthesis techniques show the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "b45bb513f7bd9de4941785490945d53e",
"text": "Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for extracting patterns from temporal sequences. However, current RNN models are ill-suited to process irregularly sampled data triggered by events generated in continuous time by sensors or other neurons. Such data can occur, for example, when the input comes from novel event-driven artificial sensors that generate sparse, asynchronous streams of events or from multiple conventional sensors with different update intervals. In this work, we introduce the Phased LSTM model, which extends the LSTM unit by adding a new time gate. This gate is controlled by a parametrized oscillation with a frequency range that produces updates of the memory cell only during a small percentage of the cycle. Even with the sparse updates imposed by the oscillation, the Phased LSTM network achieves faster convergence than regular LSTMs on tasks which require learning of long sequences. The model naturally integrates inputs from sensors of arbitrary sampling rates, thereby opening new areas of investigation for processing asynchronous sensory events that carry timing information. It also greatly improves the performance of LSTMs in standard RNN applications, and does so with an order-of-magnitude fewer computes at runtime.",
"title": ""
},
{
"docid": "8930924a223ef6a8d19e52ab5c6e7736",
"text": "Modern perception systems are notoriously complex, featuring dozens of interacting parameters that must be tuned to achieve good performance. Conventional tuning approaches require expensive ground truth, while heuristic methods are difficult to generalize. In this work, we propose an introspective ground-truth-free approach to evaluating the performance of a generic perception system. By using the posterior distribution estimate generated by a Bayesian estimator, we show that the expected performance can be estimated efficiently and without ground truth. Our simulated and physical experiments in a demonstrative indoor ground robot state estimation application show that our approach can order parameters similarly to using a ground-truth system, and is able to accurately identify top-performing parameters in varying contexts. In contrast, baseline approaches that reason only about observation log-likelihood fail in the face of challenging perceptual phenomena.",
"title": ""
},
{
"docid": "69bb10420be07fe9fb0fd372c606d04e",
"text": "Contextual text mining is concerned with extracting topical themes from a text collection with context information (e.g., time and location) and comparing/analyzing the variations of themes over different contexts. Since the topics covered in a document are usually related to the context of the document, analyzing topical themes within context can potentially reveal many interesting theme patterns. In this paper, we generalize some of these models proposed in the previous work and we propose a new general probabilistic model for contextual text mining that can cover several existing models as special cases. Specifically, we extend the probabilistic latent semantic analysis (PLSA) model by introducing context variables to model the context of a document. The proposed mixture model, called contextual probabilistic latent semantic analysis (CPLSA) model, can be applied to many interesting mining tasks, such as temporal text mining, spatiotemporal text mining, author-topic analysis, and cross-collection comparative analysis. Empirical experiments show that the proposed mixture model can discover themes and their contextual variations effectively.",
"title": ""
},
{
"docid": "242a2f64fc103af641320c1efe338412",
"text": "The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various datasources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading-based social media sentiment has the potential to yield positive returns on investment.",
"title": ""
},
{
"docid": "471e835e66b1bdfabd5de8a14914e9e6",
"text": "Context. The theme of the 2003 annual meeting is \"accountability for educational quality\". The emphasis on accountability reflects the increasing need for educators, students and politicians to demonstrate the effectiveness of educational systems. As part of the growing emphasis on accountability, high stakes achievement tests have become increasingly important and a student's performance on such tests can have a significant impact on his or her access to future educational opportunities. At the same time, concern is growing that the use of high stakes achievement tests, such as the SATMath exam and others (e.g., the Massachusetts MCAS exam) simply exacerbates existing group differences, and puts female students and those from traditionally underrepresented minority groups at a disadvantage (Willingham & Cole, 1997). New approaches are required to help all students perform to the best of their ability on high stakes tests.",
"title": ""
},
{
"docid": "c72a2e504934580f9542a62b7037cdd4",
"text": "Software defect prediction is one of the most active research areas in software engineering. We can build a prediction model with defect data collected from a software project and predict defects in the same project, i.e. within-project defect prediction (WPDP). Researchers also proposed cross-project defect prediction (CPDP) to predict defects for new projects lacking in defect data by using prediction models built by other projects. In recent studies, CPDP is proved to be feasible. However, CPDP requires projects that have the same metric set, meaning the metric sets should be identical between projects. As a result, current techniques for CPDP are difficult to apply across projects with heterogeneous metric sets. To address the limitation, we propose heterogeneous defect prediction (HDP) to predict defects across projects with heterogeneous metric sets. Our HDP approach conducts metric selection and metric matching to build a prediction model between projects with heterogeneous metric sets. Our empirical study on 28 subjects shows that about 68% of predictions using our approach outperform or are comparable to WPDP with statistical significance.",
"title": ""
},
{
"docid": "2258a0ba739557d489a796f050fad3e0",
"text": "The term fractional calculus is more than 300 years old. It is a generalization of the ordinary differentiation and integration to non-integer (arbitrary) order. The subject is as old as the calculus of differentiation and goes back to times when Leibniz, Gauss, and Newton invented this kind of calculation. In a letter to L’Hospital in 1695 Leibniz raised the following question (Miller and Ross, 1993): “Can the meaning of derivatives with integer order be generalized to derivatives with non-integer orders?\" The story goes that L’Hospital was somewhat curious about that question and replied by another question to Leibniz. “What if the order will be 1/2?\" Leibniz in a letter dated September 30, 1695 replied: “It will lead to a paradox, from which one day useful consequences will be drawn.\" The question raised by Leibniz for a fractional derivative was an ongoing topic in the last 300 years. Several mathematicians contributed to this subject over the years. People like Liouville, Riemann, and Weyl made major contributions to the theory of fractional calculus. The story of the fractional calculus continued with contributions from Fourier, Abel, Leibniz, Grünwald, and Letnikov. Nowadays, the fractional calculus attracts many scientists and engineers. There are several applications of this mathematical phenomenon in mechanics, physics, chemistry, control theory and so on (Caponetto et al., 2010; Magin, 2006; Monje et al., 2010; Oldham and Spanier, 1974; Oustaloup, 1995; Podlubny, 1999). It is natural that many authors tried to solve the fractional derivatives, fractional integrals and fractional differential equations in Matlab. A few very good and interesting Matlab functions were already submitted to the MathWorks, Inc. Matlab Central File Exchange, where they are freely downloadable for sharing among the users. In this chapter we will use some of them. It is worth mentioning some addition to Matlab toolboxes, which are appropriate for the solution of fractional calculus problems. One of them is a toolbox created by CRONE team (CRONE, 2010) and another one is the Fractional State–Space Toolkit developed by Dominik Sierociuk (Sierociuk, 2005). Last but not least we should also mention a Matlab toolbox created by Dingyü Xue (Xue, 2010), which is based on Matlab object for fractional-order transfer function and some manipulation with this class of the transfer function. Despite that the mentioned toolboxes are mainly for control systems, they can be “abused\" for solutions of general problems related to fractional calculus as well. 10",
"title": ""
},
{
"docid": "322fd3b0c6c833bac9598b510dc40b98",
"text": "Quality assessment is an indispensable technique in a large body of media applications, i.e., photo retargeting, scenery rendering, and video summarization. In this paper, a fully automatic framework is proposed to mimic how humans subjectively perceive media quality. The key is a locality-preserved sparse encoding algorithm that accurately discovers human gaze shifting paths from each image or video clip. In particular, we first extract local image descriptors from each image/video, and subsequently project them into the so-called perceptual space. Then, a nonnegative matrix factorization (NMF) algorithm is proposed that represents each graphlet by a linear and sparse combination of the basis ones. Since each graphlet is visually/semantically similar to its neighbors, a locality-preserved constraint is encoded into the NMF algorithm. Mathematically, the saliency of each graphlet is quantified by the norm of its sparse codes. Afterward, we sequentially link them into a path to simulate human gaze allocation. Finally, a probabilistic quality model is learned based on such paths extracted from a collection of photos/videos, which are marked as high quality ones via multiple Flickr users. Comprehensive experiments have demonstrated that: 1) our quality model outperforms many of its competitors significantly, and 2) the learned paths are on average 89.5% consistent with real human gaze shifting paths.",
"title": ""
},
{
"docid": "013f9499b9a3e1ffdd03aa4de48d233b",
"text": "We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a large data set containing private information, releases to the public a \"sanitization\" of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst. The sanitization may be in the form of an arbitrary data structure, accompanied by a computational procedure for determining approximate answers to queries on the original data set, or it may be a \"synthetic data set\" consisting of data items drawn from the same universe as items in the original data set; queries are carried out as if the synthetic data set were the actual input. In either case the process is non-interactive; once the sanitization has been released the original data and the curator play no further role.\n For the task of sanitizing with a synthetic dataset output, we map the boundary between computational feasibility and infeasibility with respect to a variety of utility measures. For the (potentially easier) task of sanitizing with unrestricted output format, we show a tight qualitative and quantitative connection between hardness of sanitizing and the existence of traitor tracing schemes.",
"title": ""
},
{
"docid": "ec4dcce4f53e38909be438beeb62b1df",
"text": " A very efficient protocol for plant regeneration from two commercial Humulus lupulus L. (hop) cultivars, Brewers Gold and Nugget has been established, and the morphogenetic potential of explants cultured on Adams modified medium supplemented with several concentrations of cytokinins and auxins studied. Zeatin at 4.56 μm produced direct caulogenesis and caulogenic calli in both cultivars. Subculture of these calli on Adams modified medium supplemented with benzylaminopurine (4.4 μm) and indolebutyric acid (0.49 μm) promoted shoot regeneration which gradually increased up to the third subculture. Regeneration rates of 60 and 29% were achieved for Nugget and Brewers Gold, respectively. By selection of callus lines, it has been possible to maintain caulogenic potential for 14 months. Regenerated plants were successfully transferred to field conditions.",
"title": ""
},
{
"docid": "9c3172266da959ee3cf9e7316bbcba96",
"text": "We propose a new research direction for eye-typing which is potentially much faster: dwell-free eye-typing. Dwell-free eye-typing is in principle possible because we can exploit the high redundancy of natural languages to allow users to simply look at or near their desired letters without stopping to dwell on each letter. As a first step we created a system that simulated a perfect recognizer for dwell-free eye-typing. We used this system to investigate how fast users can potentially write using a dwell-free eye-typing interface. We found that after 40 minutes of practice, users reached a mean entry rate of 46 wpm. This indicates that dwell-free eye-typing may be more than twice as fast as the current state-of-the-art methods for writing by gaze. A human performance model further demonstrates that it is highly unlikely traditional eye-typing systems will ever surpass our dwell-free eye-typing performance estimate.",
"title": ""
},
{
"docid": "681aba7f37ae6807824c299454af5721",
"text": "Due to their rapid growth and deployment, Internet of things (IoT) devices have become a central aspect of our daily lives. However, they tend to have many vulnerabilities which can be exploited by an attacker. Unsupervised techniques, such as anomaly detection, can help us secure the IoT devices. However, an anomaly detection model must be trained for a long time in order to capture all benign behaviors. This approach is vulnerable to adversarial attacks since all observations are assumed to be benign while training the anomaly detection model. In this paper, we propose CIoTA, a lightweight framework that utilizes the blockchain concept to perform distributed and collaborative anomaly detection for devices with limited resources. CIoTA uses blockchain to incrementally update a trusted anomaly detection model via self-attestation and consensus among IoT devices. We evaluate CIoTA on our own distributed IoT simulation platform, which consists of 48 Raspberry Pis, to demonstrate CIoTA’s ability to enhance the security of each device and the security of the network as a whole.",
"title": ""
},
{
"docid": "7c482427e4f0305c32210093e803eb78",
"text": "A healable transparent capacitive touch screen sensor has been fabricated based on a healable silver nanowire-polymer composite electrode. The composite electrode features a layer of silver nanowire percolation network embedded into the surface layer of a polymer substrate comprising an ultrathin soldering polymer layer to confine the nanowires to the surface of a healable Diels-Alder cycloaddition copolymer and to attain low contact resistance between the nanowires. The composite electrode has a figure-of-merit sheet resistance of 18 Ω/sq with 80% transmittance at 550 nm. A surface crack cut on the conductive surface with 18 Ω is healed by heating at 100 °C, and the sheet resistance recovers to 21 Ω in 6 min. A healable touch screen sensor with an array of 8×8 capacitive sensing points is prepared by stacking two composite films patterned with 8 rows and 8 columns of coupling electrodes at 90° angle. After deliberate damage, the coupling electrodes recover touch sensing function upon heating at 80 °C for 30 s. A capacitive touch screen based on Arduino is demonstrated capable of performing quick recovery from malfunction caused by a razor blade cutting. After four cycles of cutting and healing, the sensor array remains functional.",
"title": ""
},
{
"docid": "d8127fc372994baee6fd8632d585a347",
"text": "Dynamic query interfaces (DQIs) form a recently developed method of database access that provides continuous realtime feedback to the user during the query formulation process. Previous work shows that DQIs are elegant and powerful interfaces to small databases. Unfortunately, when applied to large databases, previous DQI algorithms slow to a crawl. We present a new approach to DQI algorithms that works well with large databases.",
"title": ""
}
] |
scidocsrr
|
960a3a7ce3d5b120870b78958c1dc9f8
|
Experiential Learning and Its Critics: Preserving the Role of Experience in Management Learning and Education
|
[
{
"docid": "c8009d5823d7af91dc9b56a4d19eed27",
"text": "Built to Last's answer is to consciously build a compmy with even more care than the hotels, airplanes, or computers from which the company earns revenue. Building a company requires much more than hiring smart employees and aggressive salespeople. Visionary companies consider the personality of their potential employees and how they will fare in the company culture. They treasure employees dedicated to the company's mission, while those that don't are \" ejected like a virus. \" They carefully choose goals and develop cultures that encourage innovation and experimentation. Visionary companies plan for the future, measure their current production, and revise plans when conditions change. Much like the TV show Biography, Built to Last gives fascinating historical insight into the birth and growth of The most radical of the three books I reviewed, The Fifth Discipline, can fundamentally change the way you view the world. The Flremise is that businesses, schools, gopernments, and other organizations can best succeed if they are learning organizations. The Fifth Discipline is Peter Senge's vehicle for explaining how five complementary components-systems thinking, personal mastery, mental models, shared vision, and team learning-can support continuous learning and therefore sustainable iniprovement. Senge, a professor a t MIT's Sloan School of Government and a director of the Society for Organizational Learning, looks beyont: simple cause-and-effect explanation:j and instead advocates \" systems thinking \" to discover a more complete understanding of how and why events occur. Systems thinkers go beyond the data readily available, question assumptions, and try to identify the many types of activities that can occur simultaneously. The need for such a worldview is made clear early in the book with the role-playing \" beer game. \" In this game, three participants play the roles of store manager, beverage distributor, and beer brewer. Each has information that would typically he available: the store manager knows how many cases of beer are in inventory , how many are on order, and how many were sold in the last week. The distributor tracks the orders placed with the brewery, inventory, orders received this week from each store, and so on. As the customers' demands vary, the manager, distributor, and brewer make what seem to be reasonable decisions to change the amount they order or brew. Thousands of people have played this and, unfortunately, the results are extremely consistent. As each player tries to maximize profits, each fails to consider how his …",
"title": ""
}
] |
[
{
"docid": "792cb4f62ad83e0ee0c94b60626103b9",
"text": "Microservices have become a popular pattern for deploying scale-out application logic and are used at companies like Netflix, IBM, and Google. An advantage of using microservices is their loose coupling, which leads to agile and rapid evolution, and continuous re-deployment. However, developers are tasked with managing this evolution and largely do so manually by continuously collecting and evaluating low-level service behaviors. This is tedious, error-prone, and slow. We argue for an approach based on service evolution modeling in which we combine static and dynamic information to generate an accurate representation of the evolving microservice-based system. We discuss how our approach can help engineers manage service upgrades, architectural evolution, and changing deployment trade-offs.",
"title": ""
},
{
"docid": "e5bca6cf6a12f2c5efec9d6be8936a14",
"text": "For several years many of us at Peabody College have participated in the evolution of a theory of community, the first conceptualization of which was presented in a working paper (McMillan, 1976) of the Center for Community Studies. To support the proposed definition, McMillan focused on the literature on group cohesiveness, and we build here on that original definition. This article attempts to describe the dynamics of the sense-ofcommunity force-to identify the various elements in the force and to describe the process by which these elements work together to produce the experience of sense of community.",
"title": ""
},
{
"docid": "13572c74a989b8677eec026788b381fe",
"text": "We examined the effect of stereotype threat on blood pressure reactivity. Compared with European Americans, and African Americans under little or no stereotype threat, African Americans under stereotype threat exhibited larger increases in mean arterial blood pressure during an academic test, and performed more poorly on difficult test items. We discuss the significance of these findings for understanding the incidence of hypertension among African Americans.",
"title": ""
},
{
"docid": "6c106d560d8894d941851386d96afe2b",
"text": "Cooperative vehicular networks require the exchange of positioning and basic status information between neighboring nodes to support higher layer protocols and applications, including active safety applications. The information exchange is based on the periodic transmission/reception of 1-hop broadcast messages on the so called control channel. The dynamic adaptation of the transmission parameters of such messages will be key for the reliable and efficient operation of the system. On one hand, congestion control protocols need to be applied to control the channel load, typically through the adaptation of the transmission parameters based on certain channel load metrics. On the other hand, awareness control protocols are also required to adequately support cooperative vehicular applications. Such protocols typically adapt the transmission parameters of periodic broadcast messages to ensure each vehicle's capacity to detect, and possibly communicate, with the relevant vehicles and infrastructure nodes present in its local neighborhood. To date, congestion and awareness control protocols have been normally designed and evaluated separately, although both will be required for the reliable and efficient operation of the system. To this aim, this paper proposes and evaluates INTERN, a new control protocol that integrates two congestion and awareness control processes. The simulation results obtained demonstrate that INTERN is able to satisfy the application's requirements of all vehicles, while effectively controlling the channel load.",
"title": ""
},
{
"docid": "2c44d1d76a76f0d107de530bc55253d6",
"text": "The cell biology of caveolae is a rapidly growing area of biomedical research. Caveolae are known primarily for their ability to transport molecules across endothelial cells, but modern cellular techniques have dramatically extended our view of caveolae. They form a unique endocytic and exocytic compartment at the surface of most cells and are capable of importing molecules and delivering them to specific locations within the cell, exporting molecules to extracellular space, and compartmentalizing a variety of signaling activities. They are not simply an endocytic device with a peculiar membrane shape but constitute an entire membrane system with multiple functions essential for the cell. Specific diseases attack this system: Pathogens have been identified that use it as a means of gaining entrance to the cell. Trying to understand the full range of functions of caveolae challenges our basic instincts about the cell.",
"title": ""
},
{
"docid": "41151075093db357c19fdaedd0d930aa",
"text": "Natural language interfaces for relational databases have been explored for several decades. Majority of the work have focused on translating natural language sentences to SQL queries or narrating SQL queries in natural language. Scant attention has been paid for natural language understanding of query execution plans (QEP) of SQL queries. In this demonstration, we present a novel generic system called NEURON that facilitates natural language interaction with QEPs. NEURON accepts a SQL query (which may include joins, aggregation, nesting, among other things) as input, executes it, and generates a natural language-based description (both in text and voice form) of the execution strategy deployed by the underlying RDBMS. Furthermore, it facilitates understanding of various features related to the QEP through a natural language-based question answering framework. NEURON can be potentially useful to database application developers in comprehending query execution strategies and to database instructors and students for pedagogical support.",
"title": ""
},
{
"docid": "03ebf532ff9df2cdd0f2a28cb2f55450",
"text": "We develop two variants of an energy-efficient cooperative diversity protocol that combats fading induced by multipath propagation in wireless networks. The underlying techniques build upon the classical relay channel and related work and exploit space diversity available at distributed antennas through coordinated transmission and processing by cooperating radios. While applicable to any wireless setting, these protocols are particularly attractive in ad-hoc or peer-to-peer wireless networks, in which radios are typically constrained to employ a single antenna. Substantial energy-savings resulting from these protocols can lead to reduced battery drain, longer network lifetime, and improved network performance in terms of, e.g., capacity.",
"title": ""
},
{
"docid": "ff93e77bb0e0b24a06780a05cc16123d",
"text": "Models in science may be used for various purposes: organizing data, synthesizing information, and making predictions. However, the value of model predictions is undermined by their uncertainty, which arises primarily from the fact that our models of complex natural systems are always open. Models can never fully specify the systems that they describe, and therefore their predictions are always subject to uncertainties that we cannot fully specify. Moreover, the attempt to make models capture the complexities of natural systems leads to a paradox: the more we strive for realism by incorporating as many as possible of the different processes and parameters that we believe to be operating in the system, the more difficult it is for us to know if our tests of the model are meaningful. A complex model may be more realistic, yet it is ironic that as we add more factors to a model, the certainty of its predictions may decrease even as our intuitive faith in the model increases. For this and other reasons, model output should not be viewed as an accurate prediction of the future state of the system. Short timeframe model output can and should be used to evaluate models and suggest avenues for future study. Model output can also generate “what if” scenarios that can help to evaluate alternative courses of action (or inaction), including worst-case and best-case outcomes. But scientists should eschew long-range deterministic predictions, which are likely to be erroneous and may damage the credibility of the communities that generate them.",
"title": ""
},
{
"docid": "d34cc5c09e882c167b3ff273f5c52159",
"text": "Received: 23 May 2011 Revised: 20 February 2012 2nd Revision: 7 September 2012 3rd Revision: 6 November 2012 Accepted: 7 November 2012 Abstract Competitive pressures are forcing organizations to be flexible. Being responsive to changing environmental conditions is an important factor in determining corporate performance. Earlier research, focusing primarily on IT infrastructure, has shown that organizational flexibility is closely related to IT infrastructure flexibility. Using real-world cases, this paper explores flexibility in the broader context of the IS function. An empirically derived framework for better understanding and managing IS flexibility is developed using grounded theory and content analysis. A process model for managing flexibility is presented; it includes steps for understanding contextual factors, recognizing reasons why flexibility is important, evaluating what needs to be flexible, identifying flexibility categories and stakeholders, diagnosing types of flexibility needed, understanding synergies and tradeoffs between them, and prescribing strategies for proactively managing IS flexibility. Three major flexibility categories, flexibility in IS operations, flexibility in IS systems & services development and deployment, and flexibility in IS management, containing 10 IS flexibility types are identified and described. European Journal of Information Systems (2014) 23, 151–184. doi:10.1057/ejis.2012.53; published online 8 January 2013",
"title": ""
},
{
"docid": "cb266f07461a58493d35f75949c4605e",
"text": "Zero shot learning in Image Classification refers to the setting where images from some novel classes are absent in the training data but other information such as natural language descriptions or attribute vectors of the classes are available. This setting is important in the real world since one may not be able to obtain images of all the possible classes at training. While previous approaches have tried to model the relationship between the class attribute space and the image space via some kind of a transfer function in order to model the image space correspondingly to an unseen class, we take a different approach and try to generate the samples from the given attributes, using a conditional variational autoencoder, and use the generated samples for classification of the unseen classes. By extensive testing on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at the test time along with the novel classes.",
"title": ""
},
{
"docid": "f4f6b02fa03eb83e21e5391f1e25b847",
"text": "To access the knowledge contained in developer communication, such as forum posts, it is useful to determine automatically the code elements referred to in the discussions. We propose a novel traceability recovery approach to extract the code elements contained in various documents. As opposed to previous work, our approach does not require an index of code elements to find links, which makes it particularly well-suited for the analysis of informal documentation. When evaluated on 188 StackOverflow answer posts containing 993 code elements, the technique performs with average 0.92 precision and 0.90 recall. As a major refinement on traditional traceability approaches, we also propose to detect which of the code elements in a document are salient, or germane, to the topic of the post. To this end we developed a three-feature decision tree classifier that performs with a precision of 0.65-0.74 and recall of 0.30-0.65, depending on the subject of the document.",
"title": ""
},
{
"docid": "0de919048191a4bbbb83a1f0e7fa9522",
"text": "In this paper, we propose a novel threat model-driven security testing approach for detecting undesirable threat behavior at runtime. Threats to security policies are modelled with UML (Unified Modeling Language) sequence diagrams. From a design-level threat model we extract a set of threat traces, each of which is an event sequence that should not occur during the system execution. The same threat model is also used to decide what kind of information should be collected at runtime and to guide the code instrumentation. The instrumented code is recompiled and executed using test cases randomly generated. The execution traces are collected and analyzed to verify whether the aforementioned undesirable threat traces are matched. If an execution trace is an instance of a threat trace, security violations are reported and actions should be taken to mitigate the threat in the system. Thus the linkage between models, code implementations, and security testing are extended to form a systematic methodology that can test certain security policies.",
"title": ""
},
{
"docid": "a8ad71932fa864edc2349abcc366c509",
"text": "In response to increasingly sophisticated state-sponsored Internet censorship, recent work has proposed a new approach to censorship resistance: end-to-middle proxying. This concept, developed in systems such as Telex, Decoy Routing, and Cirripede, moves anticensorship technology into the core of the network, at large ISPs outside the censoring country. In this paper, we focus on two technical obstacles to the deployment of certain end-to-middle schemes: the need to selectively block flows and the need to observe both directions of a connection. We propose a new construction, TapDance, that removes these requirements. TapDance employs a novel TCP-level technique that allows the anticensorship station at an ISP to function as a passive network tap, without an inline blocking component. We also apply a novel steganographic encoding to embed control messages in TLS ciphertext, allowing us to operate on HTTPS connections even under asymmetric routing. We implement and evaluate a TapDance prototype that demonstrates how the system could function with minimal impact on an ISP’s network operations.",
"title": ""
},
{
"docid": "85576e6b36757f0a475e7482e4827a91",
"text": "Existing approaches to neural machine translation are typically autoregressive models. While these models attain state-of-the-art translation quality, they are suffering from low parallelizability and thus slow at decoding long sequences. In this paper, we propose a novel model for fast sequence generation — the semi-autoregressive Transformer (SAT). The SAT keeps the autoregressive property in global but relieves in local and thus is able to produce multiple successive words in parallel at each time step. Experiments conducted on English-German and ChineseEnglish translation tasks show that the SAT achieves a good balance between translation quality and decoding speed. On WMT’14 English-German translation, the SAT achieves 5.58× speedup while maintains 88% translation quality, significantly better than the previous non-autoregressive methods. When produces two words at each time step, the SAT is almost lossless (only 1% degeneration in BLEU score).",
"title": ""
},
{
"docid": "17a6ac933c6aa864180ba3ae05a99366",
"text": "A formal approach to security in the software life cycle is essential to protect corporate resources. However, little thought has been given to this aspect of software development. Traditionally, software security has been treated as an afterthought leading to a cycle of ‘penetrate and patch.’ Due to its criticality, security should be integrated as a formal approach in the software life cycle. Both a software security checklist and assessment tools should be incorporated into this life cycle process. The current research at JPL addresses both of these areas through the development of a Software Security Assessment Instrument (SSAI). This paper focuses on the development of a Software Security Checklist (SSC) for the life cycle. It includes the critical areas of requirements gathering and specification, design and code issues, and maintenance and decommissioning of software and systems.",
"title": ""
},
{
"docid": "f465475eb7bb52d455e3ed77b4808d26",
"text": "Background Long-term dieting has been reported to reduce resting energy expenditure (REE) leading to weight regain once the diet has been curtailed. Diets are also difficult to follow for a significant length of time. The purpose of this preliminary proof of concept study was to examine the effects of short-term intermittent dieting during exercise training on REE and weight loss in overweight women.",
"title": ""
},
{
"docid": "f3bfb1542c5254997fadcc8533007972",
"text": "For most entity disambiguation systems, the secret recipes are feature representations for mentions and entities, most of which are based on Bag-of-Words (BoW) representations. Commonly, BoW has several drawbacks: (1) It ignores the intrinsic meaning of words/entities; (2) It often results in high-dimension vector spaces and expensive computation; (3) For different applications, methods of designing handcrafted representations may be quite different, lacking of a general guideline. In this paper, we propose a different approach named EDKate. We first learn low-dimensional continuous vector representations for entities and words by jointly embedding knowledge base and text in the same vector space. Then we utilize these embeddings to design simple but effective features and build a two-layer disambiguation model. Extensive experiments on real-world data sets show that (1) The embedding-based features are very effective. Even a single one embedding-based feature can beat the combination of several BoW-based features. (2) The superiority is even more promising in a difficult set where the mention-entity prior cannot work well. (3) The proposed embedding method is much better than trivial implementations of some off-the-shelf embedding algorithms. (4) We compared our EDKate with existing methods/systems and the results are also positive.",
"title": ""
},
{
"docid": "f2ba236803a453c2b351aa910fdfa32d",
"text": "This study presents PV power based cuk converter for dc load application. The maximum power from the sun radiation is obtained by sun tracking and Maximum Power Point Tracking (MPPT). The sun tracking is implemented by the stepper motor control and MPPT is implemented by the Cuk converter and the load voltage is maintained constant irrespective of the variation in solar power. This technique improves the dynamic and steady state characteristics of the system. The simulation was done in MATLAB simulink and the experiments are carried out and the results are presented.",
"title": ""
},
{
"docid": "126d8080f7dd313d534a95d8989b0fbd",
"text": "Intrusion prevention mechanisms are largely insufficient for protection of databases against Information Warfare attacks by authorized users and has drawn interest towards intrusion detection. We visualize the conflicting motives between an attacker and a detection system as a multi-stage game between two players, each trying to maximize his payoff. We consider the specific application of credit card fraud detection and propose a fraud detection system based on a game-theoretic approach. Not only is this approach novel in the domain of Information Warfare, but also it improvises over existing rule-based systems by predicting the next move of the fraudster and learning at each step.",
"title": ""
}
] |
scidocsrr
|
1f18e5170c0de6160d9360e87e80eca2
|
MODEC: Multimodal Decomposable Models for Human Pose Estimation
|
[
{
"docid": "ba085cc5591471b8a46e391edf2e78d4",
"text": "Despite recent successes, pose estimators are still somewhat fragile, and they frequently rely on a precise knowledge of the location of the object. Unfortunately, articulated objects are also very difficult to detect. Knowledge about the articulated nature of these objects, however, can substantially contribute to the task of finding them in an image. It is somewhat surprising, that these two tasks are usually treated entirely separately. In this paper, we propose an Articulated Part-based Model (APM) for jointly detecting objects and estimating their poses. APM recursively represents an object as a collection of parts at multiple levels of detail, from coarse-to-fine, where parts at every level are connected to a coarser level through a parent-child relationship (Fig. 1(b)-Horizontal). Parts are further grouped into part-types (e.g., left-facing head, long stretching arm, etc) so as to model appearance variations (Fig. 1(b)-Vertical). By having the ability to share appearance models of part types and by decomposing complex poses into parent-child pairwise relationships, APM strikes a good balance between model complexity and model richness. Extensive quantitative and qualitative experiment results on public datasets show that APM outperforms state-of-the-art methods. We also show results on PASCAL 2007 - cats and dogs - two highly challenging articulated object categories.",
"title": ""
}
] |
[
{
"docid": "371ab49af58c0eb4dc55f3fdf1c741f0",
"text": "Reinforcement learning has shown promise in learning policies that can solve complex problems. However, manually specifying a good reward function can be difficult, especially for intricate tasks. Inverse reinforcement learning offers a useful paradigm to learn the underlying reward function directly from expert demonstrations. Yet in reality, the corpus of demonstrations may contain trajectories arising from a diverse set of underlying reward functions rather than a single one. Thus, in inverse reinforcement learning, it is useful to consider such a decomposition. The options framework in reinforcement learning is specifically designed to decompose policies in a similar light. We therefore extend the options framework and propose a method to simultaneously recover reward options in addition to policy options. We leverage adversarial methods to learn joint reward-policy options using only observed expert states. We show that this approach works well in both simple and complex continuous control tasks and shows significant performance increases in one-shot transfer learning.",
"title": ""
},
{
"docid": "1047e89937593d2e08c5433652316d73",
"text": "We describe a set of top-performing systems at the SemEval 2015 English Semantic Textual Similarity (STS) task. Given two English sentences, each system outputs the degree of their semantic similarity. Our unsupervised system, which is based on word alignments across the two input sentences, ranked 5th among 73 submitted system runs with a mean correlation of 79.19% with human annotations. We also submitted two runs of a supervised system which uses word alignments and similarities between compositional sentence vectors as its features. Our best supervised run ranked 1st with a mean correlation of 80.15%.",
"title": ""
},
{
"docid": "82e170219f7fefdc2c36eb89e44fa0f5",
"text": "The Internet of Things (IOT), the idea of getting real-world objects connected with each other, will change the ways we organize, obtain and consume information radically. Through sensor networks, agriculture can be connected to the IOT, which allows us to create connections among agronomists, farmers and crops regardless of their geographical differences. With the help of the connections, the agronomists will have better understanding of crop growth models and farming practices will be improved as well. This paper reports on the design of the sensor network when connecting agriculture to the IOT. Reliability, management, interoperability, low cost and commercialization are considered in the design. Finally, we share our experiences in both development and deployment.",
"title": ""
},
{
"docid": "70df369be2c95afd04467cd291e60175",
"text": "In this paper, we introduce two novel metric learning algorithms, χ-LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. The two approaches achieve this goal in fundamentally different ways: χ-LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear χ-distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient-boosting to learn non-linear mappings directly in function space and takes advantage of this approach’s robustness, speed, parallelizability and insensitivity towards the single additional hyperparameter. On various benchmark data sets, we demonstrate these methods not only match the current state-of-the-art in terms of kNN classification error, but in the case of χ-LMNN, obtain best results in 19 out of 20 learning settings.",
"title": ""
},
{
"docid": "416f9184ae6b0c04803794b1ab2b8f50",
"text": "Although hydrophilic small molecule drugs are widely used in the clinic, their rapid clearance, suboptimal biodistribution, low intracellular absorption and toxicity can limit their therapeutic efficacy. These drawbacks can potentially be overcome by loading the drug into delivery systems, particularly liposomes; however, low encapsulation efficiency usually results. Many strategies are available to improve both the drug encapsulation efficiency and delivery to the target site to reduce side effects. For encapsulation, passive and active strategies are available. Passive strategies encompass the proper selection of the composition of the formulation, zeta potential, particle size and preparation method. Moreover, many weak acids and bases, such as doxorubicin, can be actively loaded with high efficiency. It is highly desirable that once the drug is encapsulated, it should be released preferentially at the target site, resulting in an optimal therapeutic effect devoid of side effects. For this purpose, targeted and triggered delivery approaches are available. The rapidly increasing knowledge of the many overexpressed biochemical makers in pathological sites, reviewed herein, has enabled the development of liposomes decorated with ligands for cell-surface receptors and active delivery. Furthermore, many liposomal formulations have been designed to actively release their content in response to specific stimuli, such as a pH decrease, heat, external alternating magnetic field, ultrasound or light. More than half a century after the discovery of liposomes, some hydrophilic small molecule drugs loaded in liposomes with high encapsulation efficiency are available on the market. However, targeted liposomes or formulations able to deliver the drug after a stimulus are not yet a reality in the clinic and are still awaited.",
"title": ""
},
{
"docid": "2d95b9919e1825ea46b5c5e6a545180c",
"text": "Computed tomography (CT) generates a stack of cross-sectional images covering a region of the body. The visual assessment of these images for the identification of potential abnormalities is a challenging and time consuming task due to the large amount of information that needs to be processed. In this article we propose a deep artificial neural network architecture, ReCTnet, for the fully-automated detection of pulmonary nodules in CT scans. The architecture learns to distinguish nodules and normal structures at the pixel level and generates three-dimensional probability maps highlighting areas that are likely to harbour the objects of interest. Convolutional and recurrent layers are combined to learn expressive image representations exploiting the spatial dependencies across axial slices. We demonstrate that leveraging intra-slice dependencies substantially increases the sensitivity to detect pulmonary nodules without inflating the false positive rate. On the publicly available LIDC/IDRI dataset consisting of 1,018 annotated CT scans, ReCTnet reaches a detection sensitivity of 90.5% with an average of 4.5 false positives per scan. Comparisons with a competing multi-channel convolutional neural network for multislice segmentation and other published methodologies using the same dataset provide evidence that ReCTnet offers significant performance gains. 1 ar X iv :1 60 9. 09 14 3v 1 [ st at .M L ] 2 8 Se p 20 16",
"title": ""
},
{
"docid": "96aa1f19a00226af7b5bbe0bb080582e",
"text": "CONTEXT\nComprehensive discharge planning by advanced practice nurses has demonstrated short-term reductions in readmissions of elderly patients, but the benefits of more intensive follow-up of hospitalized elders at risk for poor outcomes after discharge has not been studied.\n\n\nOBJECTIVE\nTo examine the effectiveness of an advanced practice nurse-centered discharge planning and home follow-up intervention for elders at risk for hospital readmissions.\n\n\nDESIGN\nRandomized clinical trial with follow-up at 2, 6, 12, and 24 weeks after index hospital discharge.\n\n\nSETTING\nTwo urban, academically affiliated hospitals in Philadelphia, Pa.\n\n\nPARTICIPANTS\nEligible patients were 65 years or older, hospitalized between August 1992 and March 1996, and had 1 of several medical and surgical reasons for admission.\n\n\nINTERVENTION\nIntervention group patients received a comprehensive discharge planning and home follow-up protocol designed specifically for elders at risk for poor outcomes after discharge and implemented by advanced practice nurses.\n\n\nMAIN OUTCOME MEASURES\nReadmissions, time to first readmission, acute care visits after discharge, costs, functional status, depression, and patient satisfaction.\n\n\nRESULTS\nA total of 363 patients (186 in the control group and 177 in the intervention group) were enrolled in the study; 70% of intervention and 74% of control subjects completed the trial. Mean age of sample was 75 years; 50% were men and 45% were black. By week 24 after the index hospital discharge, control group patients were more likely than intervention group patients to be readmitted at least once (37.1 % vs 20.3 %; P<.001). Fewer intervention group patients had multiple readmissions (6.2% vs 14.5%; P = .01) and the intervention group had fewer hospital days per patient (1.53 vs 4.09 days; P<.001). Time to first readmission was increased in the intervention group (P<.001). At 24 weeks after discharge, total Medicare reimbursements for health services were about $1.2 million in the control group vs about $0.6 million in the intervention group (P<.001). There were no significant group differences in post-discharge acute care visits, functional status, depression, or patient satisfaction.\n\n\nCONCLUSIONS\nAn advanced practice nurse-centered discharge planning and home care intervention for at-risk hospitalized elders reduced readmissions, lengthened the time between discharge and readmission, and decreased the costs of providing health care. Thus, the intervention demonstrated great potential in promoting positive outcomes for hospitalized elders at high risk for rehospitalization while reducing costs.",
"title": ""
},
{
"docid": "630c4e87333606c6c8e7345cb0865c64",
"text": "MapReduce plays a critical role as a leading framework for big data analytics. In this paper, we consider a geodistributed cloud architecture that provides MapReduce services based on the big data collected from end users all over the world. Existing work handles MapReduce jobs by a traditional computation-centric approach that all input data distributed in multiple clouds are aggregated to a virtual cluster that resides in a single cloud. Its poor efficiency and high cost for big data support motivate us to propose a novel data-centric architecture with three key techniques, namely, cross-cloud virtual cluster, data-centric job placement, and network coding based traffic routing. Our design leads to an optimization framework with the objective of minimizing both computation and transmission cost for running a set of MapReduce jobs in geo-distributed clouds. We further design a parallel algorithm by decomposing the original large-scale problem into several distributively solvable subproblems that are coordinated by a high-level master problem. Finally, we conduct real-world experiments and extensive simulations to show that our proposal significantly outperforms the existing works.",
"title": ""
},
{
"docid": "3ea533be157b63e673f43205d195d13e",
"text": "Recent work on fairness in machine learning has begun to be extended to recommender systems. While there is a tension between the goals of fairness and of personalization, there are contexts in which a global evaluations of outcomes is possible and where equity across such outcomes is a desirable goal. In this paper, we introduce the concept of a balanced neighborhood as a mechanism to preserve personalization in recommendation while enhancing the fairness of recommendation outcomes. We show that a modified version of the SLIM algorithm can be used to improve the balance of user neighborhoods, with the result of achieving greater outcome fairness in a real-world dataset with minimal loss in ranking performance.",
"title": ""
},
{
"docid": "1a6e9229f6bc8f6dc0b9a027e1d26607",
"text": "− This work illustrates an analysis of Rogowski coils for power applications, when operating under non ideal measurement conditions. The developed numerical model, validated by comparison with other methods and experiments, enables to investigate the effects of the geometrical and constructive parameters on the measurement behavior of the coil.",
"title": ""
},
{
"docid": "ce53aa803d587301a47166c483ecec34",
"text": "Boosting takes on various forms with different programs using different loss functions, different base models, and different optimization schemes. The gbm package takes the approach described in [3] and [4]. Some of the terminology differs, mostly due to an effort to cast boosting terms into more standard statistical terminology (e.g. deviance). In addition, the gbm package implements boosting for models commonly used in statistics but not commonly associated with boosting. The Cox proportional hazard model, for example, is an incredibly useful model and the boosting framework applies quite readily with only slight modification [7]. Also some algorithms implemented in the gbm package differ from the standard implementation. The AdaBoost algorithm [2] has a particular loss function and a particular optimization algorithm associated with it. The gbm implementation of AdaBoost adopts AdaBoost’s exponential loss function (its bound on misclassification rate) but uses Friedman’s gradient descent algorithm rather than the original one proposed. So the main purposes of this document is to spell out in detail what the gbm package implements.",
"title": ""
},
{
"docid": "6091748ab964ea58a06f9b8335f9829e",
"text": "Apprenticeship is an inherently social learning method with a long history of helping novices become experts in fields as diverse as midwifery, construction, and law. At the center of apprenticeship is the concept of more experienced people assisting less experienced ones, providing structure and examples to support the attainment of goals. Traditionally apprenticeship has been associated with learning in the context of becoming skilled in a trade or craft—a task that typically requires both the acquisition of knowledge, concepts, and perhaps psychomotor skills and the development of the ability to apply the knowledge and skills in a context-appropriate manner—and far predates formal schooling as it is known today. In many nonindustrialized nations apprenticeship remains the predominant method of teaching and learning. However, the overall concept of learning from experts through social interactions is not one that should be relegated to vocational and trade-based training while K–12 and higher educational institutions seek to prepare students for operating in an information-based society. Apprenticeship as a method of teaching and learning is just as relevant within the cognitive and metacognitive domain as it is in the psychomotor domain. In the last 20 years, the recognition and popularity of facilitating learning of all types through social methods have grown tremendously. Educators and educational researchers have looked to informal learning settings, where such methods have been in continuous use, as a basis for creating more formal instructional methods and activities that take advantage of these social constructivist methods. Cognitive apprenticeship— essentially, the use of an apprentice model to support learning in the cognitive domain—is one such method that has gained respect and popularity throughout the 1990s and into the 2000s. Scaffolding, modeling, mentoring, and coaching are all methods of teaching and learning that draw on social constructivist learning theory. As such, they promote learning that occurs through social interactions involving negotiation of content, understanding, and learner needs, and all three generally are considered forms of cognitive apprenticeship (although certainly they are not the only methods). This chapter first explores prevailing definitions and underlying theories of these teaching and learning strategies and then reviews the state of research in these area.",
"title": ""
},
{
"docid": "5c5e9a93b4838cbebd1d031a6d1038c4",
"text": "Live migration of virtual machines (VMs) is key feature of virtualization that is extensively leveraged in IaaS cloud environments: it is the basic building block of several important features, such as load balancing, pro-active fault tolerance, power management, online maintenance, etc. While most live migration efforts concentrate on how to transfer the memory from source to destination during the migration process, comparatively little attention has been devoted to the transfer of storage. This problem is gaining increasing importance: due to performance reasons, virtual machines that run large-scale, data-intensive applications tend to rely on local storage, which poses a difficult challenge on live migration: it needs to handle storage transfer in addition to memory transfer. This paper proposes a memory migration independent approach that addresses this challenge. It relies on a hybrid active push / prioritized prefetch strategy, which makes it highly resilient to rapid changes of disk state exhibited by I/O intensive workloads. At the same time, it is minimally intrusive in order to ensure a maximum of portability with a wide range of hypervisors. Large scale experiments that involve multiple simultaneous migrations of both synthetic benchmarks and a real scientific application show improvements of up to 10x faster migration time, 10x less bandwidth consumption and 8x less performance degradation over state-of-art.",
"title": ""
},
{
"docid": "26c003f70bbaade54b84dcb48d2a08c9",
"text": "Tricaine methanesulfonate (TMS) is an anesthetic that is approved for provisional use in some jurisdictions such as the United States, Canada, and the United Kingdom (UK). Many hatcheries and research studies use TMS to immobilize fish for marking or transport and to suppress sensory systems during invasive procedures. Improper TMS use can decrease fish viability, distort physiological data, or result in mortalities. Because animals may be anesthetized by junior staff or students who may have little experience in fish anesthesia, training in the proper use of TMS may decrease variability in recovery, experimental results and increase fish survival. This document acts as a primer on the use of TMS for anesthetizing juvenile salmonids, with an emphasis on its use in surgical applications. Within, we briefly describe many aspects of TMS including the legal uses for TMS, and what is currently known about the proper storage and preparation of the anesthetic. We outline methods and precautions for administration and changes in fish behavior during progressively deeper anesthesia and discuss the physiological effects of TMS and its potential for compromising fish health. Despite the challenges of working with TMS, it is currently one of the few legal options available in the USA and in other countries until other anesthetics are approved and is an important tool for the intracoelomic implantation of electronic tags in fish.",
"title": ""
},
{
"docid": "181a3d68fd5b5afc3527393fc3b276f9",
"text": "Updating inference in response to new evidence is a fundamental challenge in artificial intelligence. Many real problems require large probabilistic graphical models, containing possibly millions of interdependent variables. For such large models, jointly updating the most likely (i.e., MAP) configuration of the variables each time new evidence is encountered can be infeasible, even if inference is tractable. In this paper, we introduce budgeted online collective inference, in which the MAP configuration of a graphical model is updated efficiently by revising the assignments to a subset of the variables while holding others fixed. The goal is to selectively update certain variables without sacrificing quality with respect to full inference. To formalize the consequences of partially updating inference, we introduce the concept of inference regret. We derive inference regret bounds for a class of graphical models with strongly-convex free energies. These theoretical insights, combined with a thorough analysis of the optimization solver, motivate new approximate methods for efficiently updating the variable assignments under a budget constraint. In experiments, we demonstrate that our algorithms can reduce inference time by 65% with accuracy comparable to full inference.",
"title": ""
},
{
"docid": "1c1f5159ab51923fcc4fef2fad501159",
"text": "This article assesses the consequences of poverty between a child's prenatal year and 5th birthday for several adult achievement, health, and behavior outcomes, measured as late as age 37. Using data from the Panel Study of Income Dynamics (1,589) and controlling for economic conditions in middle childhood and adolescence, as well as demographic conditions at the time of the birth, findings indicate statistically significant and, in some cases, quantitatively large detrimental effects of early poverty on a number of attainment-related outcomes (adult earnings and work hours). Early-childhood poverty was not associated with such behavioral measures as out-of-wedlock childbearing and arrests. Most of the adult earnings effects appear to operate through early poverty's association with adult work hours.",
"title": ""
},
{
"docid": "3ae6cb348cff49851cf15036483e2117",
"text": "Rate-Distortion Methods for Image and Video Compression: An. Or Laplacian p.d.f.s and optimal bit allocation techniques to ensure that bits.Rate-Distortion Methods for Image and Video Compression. Coding Parameters: chosen on input-by-input rampant caries pdf basis to optimize. In this article we provide an overview of rate-distortion R-D based optimization techniques and their practical application to image and video. Rate-distortion methods for image and video compression. Enter the password to open this PDF file.Bernd Girod: EE368b Image and Video Compression. Lower the bit-rate R by allowing some acceptable distortion. Consideration of a specific coding method. Bit-rate at least R.rate-distortion R-D based optimization techniques and their practical application to. Area of R-D optimized image and video coding see 1, 2 and many of the. Such Intra coding alone is in common use as ramones guitar tab pdf a video coding method today. MPEG-2: A step higher in bit rate, picture quality, and popularity.coding, rate distortion RD optimization, soft decision quantization SDQ. RD methods for video compression can be classified into two categories. Practical SDQ include without limitation SDQ in JPEG image coding and H. However, since we know that most lossy compression techniques operate on data. In image and video compression, the human perception models are less well. The conditional PDF QY Xy x that minimize rate for a given distortion D.The H. 264AVC video coding standard has been recently proposed by the Joint. MB which determine the overall rate and the distortion of the coded. Figure 2: The picture encoding process in the proposed method. Selection of λ and.fact, operational rate-distortion methods have come into wide use for image and video coders. In previous work, de Queiroz applied this technique to finding.",
"title": ""
},
{
"docid": "fdc875181fe37e6b469d07e0e580fadb",
"text": "Attention mechanism has recently attracted increasing attentions in the area of facial action unit (AU) detection. By finding the region of interest (ROI) of each AU with the attention mechanism, AU related local features can be captured. Most existing attention based AU detection works use prior knowledge to generate fixed attentions or refine the predefined attentions within a small range, which limits their capacity to model various AUs. In this paper, we propose a novel end-to-end weakly-supervised attention and relation learning framework for AU detection with only AU labels, which has not been explored before. In particular, multi-scale features shared by each AU are learned firstly, and then both channel-wise attentions and spatial attentions are learned to select and extract AU related local features. Moreover, pixellevel relations for AUs are further captured to refine spatial attentions so as to extract more relevant local features. Extensive experiments on BP4D and DISFA benchmarks demonstrate that our framework (i) outperforms the state-of-the-art methods for AU detection, and (ii) can find the ROI of each AU and capture the relations among AUs adaptively.",
"title": ""
},
{
"docid": "8be921cfab4586b6a19262da9a1637de",
"text": "Automatic segmentation of microscopy images is an important task in medical image processing and analysis. Nucleus detection is an important example of this task. Mask-RCNN is a recently proposed state-of-the-art algorithm for object detection, object localization, and object instance segmentation of natural images. In this paper we demonstrate that Mask-RCNN can be used to perform highly effective and efficient automatic segmentations of a wide range of microscopy images of cell nuclei, for a variety of cells acquired under a variety of conditions.",
"title": ""
},
{
"docid": "37a47bd2561b534d5734d250d16ff1c2",
"text": "Many chronic eye diseases can be conveniently investigated by observing structural changes in retinal blood vessel diameters. However, detecting changes in an accurate manner in face of interfering pathologies is a challenging task. The task is generally performed through an automatic computerized process. The literature shows that powerful methods have already been proposed to identify vessels in retinal images. Though a significant progress has been achieved toward methods to separate blood vessels from the uneven background, the methods still lack the necessary sensitivity to segment fine vessels. Recently, a multi-scale line-detector method proved its worth in segmenting thin vessels. This paper presents modifications to boost the sensitivity of this multi-scale line detector. First, a varying window size with line-detector mask is suggested to detect small vessels. Second, external orientations are fed to steer the multi-scale line detectors into alignment with flow directions. Third, optimal weights are suggested for weighted linear combinations of individual line-detector responses. Fourth, instead of using one global threshold, a hysteresis threshold is proposed to find a connected vessel tree. The overall impact of these modifications is a large improvement in noise removal capability of the conventional multi-scale line-detector method while finding more of the thin vessels. The contrast-sensitive steps are validated using a publicly available database and show considerable promise for the suggested strategy.",
"title": ""
}
] |
scidocsrr
|
e47bd221beaff11097993658b3c5926a
|
A Neural Network Approach for Knowledge-Driven Response Generation
|
[
{
"docid": "527d7c091cfc63c8e9d36afdd6b7bdfe",
"text": "Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.",
"title": ""
}
] |
[
{
"docid": "285fd0cdd988df78ac172640509b2cd3",
"text": "Self-assembly in swarm robotics is essential for a group of robots in achieving a common goal that is not possible to achieve by a single robot. Self-assembly also provides several advantages to swarm robotics. Some of these include versatility, scalability, re-configurability, cost-effectiveness, extended reliability, and capability for emergent phenomena. This work investigates the effect of self-assembly in evolutionary swarm robotics. Because of the lack of research literature within this paradigm, there are few comparisons of the different implementations of self-assembly mechanisms. This paper reports the influence of connection port configuration on evolutionary self-assembling swarm robots. The port configuration consists of the number and the relative positioning of the connection ports on each of the robot. Experimental results suggest that configuration of the connection ports can significantly impact the emergence of selfassembly in evolutionary swarm robotics.",
"title": ""
},
{
"docid": "5509b4a8e0a4b98795c2fc561f18d9c4",
"text": "A low-power variable-gain amplifier (VGA) based on transconductance (gm)-ratioed amplification is analyzed and designed with improved linearity. The VGA has the merits of continuous gain tuning, low power consumption and small chip area. However, the linearity performance of the gm-ratioed amplifier is usually poor. We analyze distortion in gm-ratioed amplifiers and propose to improve the output linearity by applying load degeneration technique. It is found that theoretically the output linearity can be improved by 8.5 dB at the same power consumption. We also analyze gain, bandwidth and noise performance of the gm-ratioed amplifiers. Two VGAs based on gm-ratioed amplification are designed and fabricated in a 0.18-μm CMOS process-one with load degeneration only and the other with both input and load degeneration. The VGA with load degeneration only achieves gain of -20 to 41 dB, bandwidth of 121 to 211 MHz, and input and output P1dB up to - 17 dBm and 0.65 dBm, respectively. The VGA with both input and load degeneration achieves gain of -37 to 28 dB, bandwidth of 76 to 809 MHz, and input and output P1dB up to - 2.63 dBm and 2.29 dBm, respectively. The two VGAs consume a similar amount of power, which is about 3 to 5 mW from a 1.8-V supply. For the same bias condition, the proposed load degeneration improves the output linearity by more than 15 dB.",
"title": ""
},
{
"docid": "6fb399cd455b28c713bdbd0ec20b0abf",
"text": "In this paper we present a broad overview of the last 40 years of research on cognitive architectures. To date, the number of existing architectures has reached several hundred, but most of the existing surveys do not reflect this growth and instead focus on a handful of well-established architectures. In this survey we aim to provide a more inclusive and high-level overview of the research on cognitive architectures. Our final set of 84 architectures includes 49 that are still actively developed, and borrow from a diverse set of disciplines, spanning areas from psychoanalysis to neuroscience. To keep the length of this paper within reasonable limits we discuss only the core cognitive abilities, such as perception, attention mechanisms, action selection, memory, learning, reasoning and metareasoning. In order to assess the breadth of practical applications of cognitive architectures we present information on over 900 practical projects implemented using the cognitive architectures in our list. We use various visualization techniques to highlight the overall trends in the development of the field. In addition to summarizing the current state-of-the-art in the cognitive architecture research, this survey describes a variety of methods and ideas that have been tried and their relative success in modeling human cognitive abilities, as well as which aspects of cognitive behavior need more research with respect to their mechanistic counterparts and thus can further inform how cognitive science might progress.",
"title": ""
},
{
"docid": "819195697309e48749e340a86dfc866d",
"text": "For the first time, a single source of cellulosic biomass was pretreated by leading technologies using identical analytical methods to provide comparative performance data. In particular, ammonia explosion, aqueous ammonia recycle, controlled pH, dilute acid, flowthrough, and lime approaches were applied to prepare corn stover for subsequent biological conversion to sugars through a Biomass Refining Consortium for Applied Fundamentals and Innovation (CAFI) among Auburn University, Dartmouth College, Michigan State University, the National Renewable Energy Laboratory, Purdue University, and Texas A&M University. An Agricultural and Industrial Advisory Board provided guidance to the project. Pretreatment conditions were selected based on the extensive experience of the team with each of the technologies, and the resulting fluid and solid streams were characterized using standard methods. The data were used to close material balances, and energy balances were estimated for all processes. The digestibilities of the solids by a controlled supply of cellulase enzyme and the fermentability of the liquids were also assessed and used to guide selection of optimum pretreatment conditions. Economic assessments were applied based on the performance data to estimate each pretreatment cost on a consistent basis. Through this approach, comparative data were developed on sugar recovery from hemicellulose and cellulose by the combined pretreatment and enzymatic hydrolysis operations when applied to corn stover. This paper introduces the project and summarizes the shared methods for papers reporting results of this research in this special edition of Bioresource Technology.",
"title": ""
},
{
"docid": "846f8f33181c3143bb8f54ce8eb3e5cc",
"text": "Story Point is a relative measure heavily used for agile estimation of size. The team decides how big a point is, and based on that size, determines how many points each work item is. In many organizations, the use of story points for similar features can vary from team to another, and successfully, based on the teams' sizes, skill set and relative use of this tool. But in a CMMI organization, this technique demands a degree of consistency across teams for a more streamlined approach to solution delivery. This generates a challenge for CMMI organizations to adopt Agile in software estimation and planning. In this paper, a process and methodology that guarantees relativity in software sizing while using agile story points is introduced. The proposed process and methodology are applied in a CMMI company level three on different projects. By that, the story point is used on the level of the organization, not the project. Then, the performance of sizing process is measured to show a significant improvement in sizing accuracy after adopting the agile story point in CMMI organizations. To complete the estimation cycle, an improvement in effort estimation dependent on story point is also introduced, and its performance effect is measured.",
"title": ""
},
{
"docid": "7867544be1b36ffab85b02c63cb03922",
"text": "In this paper a general theory of multistage decimators and interpolators for sampling rate reduction and sampling rate increase is presented. A set of curves and the necessary relations for optimally designing multistage decimators is also given. It is shown that the processes of decimation and interpolation are duals and therefore the same set of design curves applies to both problems. Further, it is shown that highly efficient implementations of narrow-band finite impulse response (FIR) fiiters can be obtained by cascading the processes of decimation and interpolation. Examples show that the efficiencies obtained are comparable to those of recursive elliptic filter designs.",
"title": ""
},
{
"docid": "934160b33f99886f9a72d0b871054101",
"text": "One of the common endeavours in engineering applications is outlier detection, which aims to identify inconsistent records from large amounts of data. Although outlier detection schemes in data mining discipline are acknowledged as a more viable solution to efficient identification of anomalies from these data repository, current outlier mining algorithms require the input of domain parameters. These parameters are often unknown, difficult to determine and vary across different datasets containing different cluster features. This paper presents a novel resolution-based outlier notion and a nonparametric outlier-mining algorithm, which can efficiently identify and rank top listed outliers from a wide variety of datasets. The algorithm generates reasonable outlier results by taking both local and global features of a dataset into account. Experiments are conducted using both synthetic datasets and a real life construction equipment dataset from a large road building contractor. Comparison with the current outlier mining algorithms indicates that the proposed algorithm is more effective and can be integrated into a decision support system to serve as a universal detector of potentially inconsistent records.",
"title": ""
},
{
"docid": "b23268140eb8e7fd2c0e23662ad23a9b",
"text": "It is shown that, under some minor restrictions, the functional behavior of radial basis function networks (RBFNs) and that of fuzzy inference systems are actually equivalent. This functional equivalence makes it possible to apply what has been discovered (learning rule, representational power, etc.) for one of the models to the other, and vice versa. It is of interest to observe that two models stemming from different origins turn out to be functionally equivalent.",
"title": ""
},
{
"docid": "af3297de35d49f774e2f31f31b09fd61",
"text": "This paper explores the phenomena of the emergence of the use of artificial intelligence in teaching and learning in higher education. It investigates educational implications of emerging technologies on the way students learn and how institutions teach and evolve. Recent technological advancements and the increasing speed of adopting new technologies in higher education are explored in order to predict the future nature of higher education in a world where artificial intelligence is part of the fabric of our universities. We pinpoint some challenges for institutions of higher education and student learning in the adoption of these technologies for teaching, learning, student support, and administration and explore further directions for research.",
"title": ""
},
{
"docid": "e97f494b2eed2b14e2d4c0fd80e38170",
"text": "We present a stochastic gradient descent optimisation method for image registration with adaptive step size prediction. The method is based on the theoretical work by Plakhov and Cruz (J. Math. Sci. 120(1):964–973, 2004). Our main methodological contribution is the derivation of an image-driven mechanism to select proper values for the most important free parameters of the method. The selection mechanism employs general characteristics of the cost functions that commonly occur in intensity-based image registration. Also, the theoretical convergence conditions of the optimisation method are taken into account. The proposed adaptive stochastic gradient descent (ASGD) method is compared to a standard, non-adaptive Robbins-Monro (RM) algorithm. Both ASGD and RM employ a stochastic subsampling technique to accelerate the optimisation process. Registration experiments were performed on 3D CT and MR data of the head, lungs, and prostate, using various similarity measures and transformation models. The results indicate that ASGD is robust to these variations in the registration framework and is less sensitive to the settings of the user-defined parameters than RM. The main disadvantage of RM is the need for a predetermined step size function. The ASGD method provides a solution for that issue.",
"title": ""
},
{
"docid": "48931b870057884b8b1c679781e2adc9",
"text": "Recommender systems have been researched extensively by the Technology Enhanced Learning (TEL) community during the last decade. By identifying suitable resources from a potentially overwhelming variety of choices, such systems offer a promising approach to facilitate both learning and teaching tasks. As learning is taking place in extremely diverse and rich environments, the incorporation of contextual information about the user in the recommendation process has attracted major interest. Such contextualization is researched as a paradigm for building intelligent systems that can better predict and anticipate the needs of users, and act more efficiently in response to their behavior. In this paper, we try to assess the degree to which current work in TEL recommender systems has achieved this, as well as outline areas in which further work is needed. First, we present a context framework that identifies relevant context dimensions for TEL applications. Then, we present an analysis of existing TEL recommender systems along these dimensions. Finally, based on our survey results, we outline topics on which further research is needed.",
"title": ""
},
{
"docid": "ac5c9087661d71880ce66f0551b133d6",
"text": "Cloud computing has become one of the key considerations both in academia and industry. Cheap, seemingly unlimited computing resources that can be allocated almost instantaneously and pay-as-you-go pricing schemes are some of the reasons for the success of Cloud computing. The Cloud computing landscape, however, is plagued by many issues hindering adoption. One such issue is vendor lock-in, forcing the Cloud users to adhere to one service provider in terms of data and application logic. Semantic Web has been an important research area that has seen significant attention from both academic and industrial researchers. One key property of Semantic Web is the notion of interoperability and portability through high level models. Significant work has been done in the areas of data modeling, matching, and transformations. The issues the Cloud computing community is facing now with respect to portability of data and application logic are exactly the same issue the Semantic Web community has been trying to address for some time. In this paper we present an outline of the use of well established semantic technologies to overcome the vendor lock-in issues in Cloud computing. We present a semantics-centric programming paradigm to create portable Cloud applications and discuss MobiCloud, our early attempt to implement the proposed approach.",
"title": ""
},
{
"docid": "8979f2a0e6db231b1363f764366e1d56",
"text": "In the current object detection field, one of the fastest algorithms is the Single Shot Multi-Box Detector (SSD), which uses a single convolutional neural network to detect the object in an image. Although SSD is fast, there is a big gap compared with the state-of-the-art on mAP. In this paper, we propose a method to improve SSD algorithm to increase its classification accuracy without affecting its speed. We adopt the Inception block to replace the extra layers in SSD, and call this method Inception SSD (I-SSD). The proposed network can catch more information without increasing the complexity. In addition, we use the batch-normalization (BN) and the residual structure in our I-SSD network architecture. Besides, we propose an improved non-maximum suppression method to overcome its deficiency on the expression ability of the model. The proposed I-SSD algorithm achieves 78.6% mAP on the Pascal VOC2007 test, which outperforms SSD algorithm while maintaining its time performance. We also construct an Outdoor Object Detection (OOD) dataset to testify the effectiveness of the proposed I-SSD on the platform of unmanned vehicles.",
"title": ""
},
{
"docid": "9e6899c27ea5ada89373c59617c57287",
"text": "In order to provide location information for indoor applications and context-aware computing, a lot of research has been carried out over the last decade on the development of real-time indoor location systems. In this paper, we investigate indoor location concepts and focus on two major technologies used in many indoor location systems, i.e. RF and ultrasonic. An overview of various RF systems that use different RF properties for location estimation is given. Ultrasonic systems are reviewed in detail as they provide low-cost, fine-grained location systems. A few well-known ultrasonic location systems are investigated and compared on the basis of performance, accuracy and limitations.",
"title": ""
},
{
"docid": "d6a585443f5829b556a1064b9b92113a",
"text": "The water quality monitoring system is designed to meet the water quality monitoring needs of the environmental protection department in a particular area. The system is based on a Wireless Sensor Network (WSN). It consists of a Wireless Water Quality Monitoring Network and a Remote Data Center. The hardware platform uses the wireless microprocessor CC2430 as the core of each node. The sensor network is built in accordance with the Zigbee wireless transmission protocol. The WSN samples the water quality and sends the data to the Internet with the help of a GPRS DTU, which has a built-in TCP/IP protocol. Through the Internet, the Remote Data Center obtains the real-time water quality data and then analyzes, processes and records it. The environmental protection department can thus provide real-time guidance to sectors that depend on regional water quality conditions, such as industry, plant cultivation and aquaculture. Most importantly, the work can be done more efficiently and at lower cost.",
"title": ""
},
{
"docid": "0d6a276770da5e7e544f66256084ba75",
"text": "Arc and Path Consistency Revisited. Roger Mohr and Thomas C. Henderson, CRIN, BP 239, 54506 Vandoeuvre (France).",
"title": ""
},
{
"docid": "216f97a97d240456d36ec765fd45739e",
"text": "This paper explores the growing trend of using mobile technology in university classrooms, exploring the use of tablets in particular, to identify learning benefits faced by students. Students, acting on their efficacy beliefs, make decisions regarding technology’s influence in improving their education. We construct a theoretical model in which internal and external factors affect a student’s self-efficacy which in turn affects the extent of adoption of a device for educational purposes. Through qualitative survey responses of university students who were given an Apple iPad to keep for the duration of a university course we find high levels of self-efficacy leading to positive views of the technology’s learning enhancement capabilities. Student observations on the practicality of the technology, off-topic use and its effects, communication, content, and perceived market advantage of using a tablet are also explored.",
"title": ""
},
{
"docid": "fddf6e71af23aba468989d6d09da989c",
"text": "The rapidly increasing pervasiveness and integration of computers in human society calls for a broad discipline under which this development can be studied. We argue that to design and use technology one needs to develop and use models of humans and machines in all their aspects, including cognitive and memory models, but also social influence and (artificial) emotions. We call this wider discipline Behavioural Computer Science (BCS), and argue in this paper for why BCS models should unify (models of) the behaviour of humans and machines when designing information and communication technology systems. Thus, one main point to be addressed is the incorporation of empirical evidence for actual human behaviour, instead of making inferences about behaviour based on the rational agent model. Empirical studies can be one effective way to constantly update the behavioural models. We are motivated by the future advancements in artificial intelligence which will give machines capabilities that from many perspectives will be indistinguishable from those of humans. Such machine behaviour would be studied using BCS models, looking at questions about machine trust like “Can a self driving car trust its passengers?”, or artificial influence like “Can the user interface adapt to the user’s behaviour, and thus influence this behaviour?”. We provide a few directions for approaching BCS, focusing on modelling of human and machine behaviour, as well as their interaction.",
"title": ""
},
{
"docid": "87320e6bc2d191bac9e23f4d56609fb6",
"text": "As security constraints are becoming more and more important, even for low-cost and low-power devices, new attacks and countermeasures are constantly proposed. Following this trend, Body Bias Injection (BBI) was introduced a few years ago. This new fault injection method consists in applying a high voltage pulse on the circuit substrate to induce faults. This paper presents an advanced evaluation bench allowing to perform BBI attacks with a good repeatability to evaluate the sensitivity of various circuits to this new threat. The moderate cost of this setup offers the opportunity for every electronic laboratory to use this new attack method and evaluate its effect on various devices. In addition, the physical effects of such attacks are described and a more accurate attack model is given.",
"title": ""
}
] |
scidocsrr
|
508518916728355dfc8cf4473600339e
|
Classification and Comparison of Range-Based Localization Techniques in Wireless Sensor Networks
|
[
{
"docid": "ef39b902bb50be657b3b9626298da567",
"text": "We consider the problem of node positioning in ad hoc networks. We propose a distributed, infrastructure-free positioning algorithm that does not rely on GPS (Global Positioning System). Instead, the algorithm uses the distances between the nodes to build a relative coordinate system in which the node positions are computed in two dimensions. Despite the distance measurement errors and the motion of the nodes, the algorithm provides sufficient location information and accuracy to support basic network functions. Examples of applications where this algorithm can be used include Location Aided Routing [10] and Geodesic Packet Forwarding [2]. Another example are sensor networks, where mobility is less of a problem. The main contribution of this work is to define and compute relative positions of the nodes in an ad hoc network without using GPS. We further explain how the proposed approach can be applied to wide area ad hoc networks.",
"title": ""
}
] |
[
{
"docid": "40b69a316255b26c77cfb37dee10c719",
"text": "Lake and Baroni (2018) recently introduced the SCAN data set, which consists of simple commands paired with action sequences and is intended to test the strong generalization abilities of recurrent sequence-to-sequence models. Their initial experiments suggested that such models may fail because they lack the ability to extract systematic rules. Here, we take a closer look at SCAN and show that it does not always capture the kind of generalization that it was designed for. To mitigate this we propose a complementary dataset, which requires mapping actions back to the original commands, called NACS. We show that models that do well on SCAN do not necessarily do well on NACS, and that NACS exhibits properties more closely aligned with realistic usecases for sequence-to-sequence models.",
"title": ""
},
{
"docid": "c73af0945ac35847c7a86a7f212b4d90",
"text": "We report a case of planned complex suicide (PCS) by a young man who had previously tried to commit suicide twice. He was found dead hanging by his neck, with a shot in his head. The investigation of the scene, the method employed, and previous attempts at suicide altogether pointed toward a suicidal etiology. The main difference between PCS and those cases defined in the medicolegal literature as combined suicides lies in the complex mechanism used by the victim as a protection against a failure in one of the mechanisms.",
"title": ""
},
{
"docid": "6f45bc16969ed9deb5da46ff8529bb8a",
"text": "In the future, mobile systems will increasingly feature more advanced organic light-emitting diode (OLED) displays. The power consumption of these displays is highly dependent on the image content. However, existing OLED power-saving techniques either change the visual experience of users or degrade the visual quality of images in exchange for a reduction in the power consumption. Some techniques attempt to enhance the image quality by employing a compound objective function. In this article, we present a win-win scheme that always enhances the image quality while simultaneously reducing the power consumption. We define metrics to assess the benefits and cost for potential image enhancement and power reduction. We then introduce algorithms that ensure the transformation of images into their quality-enhanced power-saving versions. Next, the win-win scheme is extended to process videos at a justifiable computational cost. All the proposed algorithms are shown to possess the win-win property without assuming accurate OLED power models. Finally, the proposed scheme is realized through a practical camera application and a video camcorder on mobile devices. The results of experiments conducted on a commercial tablet with a popular image database and on a smartphone with real-world videos are very encouraging and provide valuable insights for future research and practices.",
"title": ""
},
{
"docid": "9ad8a5b73430e4fe6b86d5fb8e2412b0",
"text": "We apply coset codes to adaptive modulation in fading channels. Adaptive modulation is a powerful technique to improve the energy efficiency and increase the data rate over a fading channel. Coset codes are a natural choice to use with adaptive modulation since the channel coding and modulation designs are separable. Therefore, trellis and lattice codes designed for additive white Gaussian noise (AWGN) channels can be superimposed on adaptive modulation for fading channels, with the same approximate coding gains. We first describe the methodology for combining coset codes with a general class of adaptive modulation techniques. We then apply this methodology to a spectrally efficient adaptive M -ary quadrature amplitude modulation (MQAM) to obtain trellis-coded adaptive MQAM. We present analytical and simulation results for this design which show an effective coding gain of 3 dB relative to uncoded adaptive MQAM for a simple four-state trellis code, and an effective 3.6-dB coding gain for an eight-state trellis code. More complex trellis codes are shown to achieve higher gains. We also compare the performance of trellis-coded adaptive MQAM to that of coded modulation with built-in time diversity and fixed-rate modulation. The adaptive method exhibits a power savings of up to 20 dB.",
"title": ""
},
{
"docid": "26f76aa41a64622ee8f0eaaed2aac529",
"text": "OBJECTIVE\nIn this study, we explored the impact of an occupational therapy wellness program on daily habits and routines through the perspectives of youth and their parents.\n\n\nMETHOD\nData were collected through semistructured interviews with children and their parents, the Pizzi Healthy Weight Management Assessment(©), and program activities.\n\n\nRESULTS\nThree themes emerged from the interviews: Program Impact, Lessons Learned, and Time as a Barrier to Health. The most common areas that both youth and parents wanted to change were time spent watching television and play, fun, and leisure time. Analysis of activity pie charts indicated that the youth considerably increased active time in their daily routines from Week 1 to Week 6 of the program.\n\n\nCONCLUSION\nAn occupational therapy program focused on health and wellness may help youth and their parents be more mindful of their daily activities and make health behavior changes.",
"title": ""
},
{
"docid": "efa066fc7ed815cc43a40c9c327b2cb3",
"text": "Induction surface hardening of parts with a non-uniform cylindrical shape requires a multi-frequency process in order to obtain a uniform surface hardened depth. This paper presents an induction heating high-power supply consisting of a single inverter circuit and a specially designed output resonant circuit. The whole circuit supplies both medium- and high-frequency power signals to the heating inductor simultaneously.",
"title": ""
},
{
"docid": "d90a66cf63abdc1d0caed64812de7043",
"text": "BACKGROUND/AIMS\nEnd-stage liver disease accounts for one in forty deaths worldwide. Chronic infections with hepatitis B virus (HBV) and hepatitis C virus (HCV) are well-recognized risk factors for cirrhosis and liver cancer, but estimates of their contributions to worldwide disease burden have been lacking.\n\n\nMETHODS\nThe prevalence of serologic markers of HBV and HCV infections among patients diagnosed with cirrhosis or hepatocellular carcinoma (HCC) was obtained from representative samples of published reports. Attributable fractions of cirrhosis and HCC due to these infections were estimated for 11 WHO-based regions.\n\n\nRESULTS\nGlobally, 57% of cirrhosis was attributable to either HBV (30%) or HCV (27%) and 78% of HCC was attributable to HBV (53%) or HCV (25%). Regionally, these infections usually accounted for >50% of HCC and cirrhosis. Applied to 2002 worldwide mortality estimates, these fractions represent 929,000 deaths due to chronic HBV and HCV infections, including 446,000 cirrhosis deaths (HBV: n=235,000; HCV: n=211,000) and 483,000 liver cancer deaths (HBV: n=328,000; HCV: n=155,000).\n\n\nCONCLUSIONS\nHBV and HCV infections account for the majority of cirrhosis and primary liver cancer throughout most of the world, highlighting the need for programs to prevent new infections and provide medical management and treatment for those already infected.",
"title": ""
},
{
"docid": "e6c0aa517c857ed217fc96aad58d7158",
"text": "Conjoined twins, popularly known as Siamese twins, result from aberrant embryogenesis [1]. It is a rare presentation with an incidence of 1 in 50,000 births. Since 60% of these cases are stillbirths, the true incidence is estimated to be approximately 1 in 200,000 births [2-4]. This disorder is more common in females, with a female-to-male ratio of 3:1 [5]. Conjoined twins are classified based on their site of attachment with the suffix 'pagus', a Greek term meaning "fixed". The main types of conjoined twins are omphalopagus (abdomen), thoracopagus (thorax), cephalopagus (ventrally head to umbilicus), ischiopagus (pelvis), parapagus (laterally body side), craniopagus (head), pygopagus (sacrum) and rachipagus (vertebral column) [6]. Cephalopagus is an extremely rare variant of conjoined twins with an incidence of 11% among all cases. These twins are fused at the head, thorax and upper abdominal cavity. They are predominantly of two types: Janiceps (two faces on either side of the head) or non-Janiceps (normal single head and face). We hereby report a case of a non-Janiceps cephalopagus conjoined twin, which was diagnosed after delivery.",
"title": ""
},
{
"docid": "c0d7cd54a947d9764209e905a6779d45",
"text": "The mainstream approach to protecting the location-privacy of mobile users in location-based services (LBSs) is to alter the users' actual locations in order to reduce the location information exposed to the service provider. The location obfuscation algorithm behind an effective location-privacy preserving mechanism (LPPM) must consider three fundamental elements: the privacy requirements of the users, the adversary's knowledge and capabilities, and the maximal tolerated service quality degradation stemming from the obfuscation of true locations. We propose the first methodology, to the best of our knowledge, that enables a designer to find the optimal LPPM for a LBS given each user's service quality constraints against an adversary implementing the optimal inference algorithm. Such LPPM is the one that maximizes the expected distortion (error) that the optimal adversary incurs in reconstructing the actual location of a user, while fulfilling the user's service-quality requirement. We formalize the mutual optimization of user-adversary objectives (location privacy vs. correctness of localization) by using the framework of Stackelberg Bayesian games. In such setting, we develop two linear programs that output the best LPPM strategy and its corresponding optimal inference attack. Our optimal user-centric LPPM can be easily integrated in the users' mobile devices they use to access LBSs. We validate the efficacy of our game theoretic method against real location traces. Our evaluation confirms that the optimal LPPM strategy is superior to a straightforward obfuscation method, and that the optimal localization attack performs better compared to a Bayesian inference attack.",
"title": ""
},
{
"docid": "4e75d06e1e23cf8efdcafd2f59a0313f",
"text": "The International Solid-State Circuits Conference (ISSCC) is the flagship conference of the IEEE Solid-State Circuits Society. This year, for the 65th ISSCC, the theme is \"Silicon Engineering a Social World.\" Continued advances in solid-state circuits and systems have brought ever-more powerful communication and computational capabilities into mobile form factors. Such ubiquitous smart devices lie at the heart of a revolution shaping how we connect, collaborate, build relationships, and share information. These social technologies allow people to maintain connections and support networks not otherwise possible; they provide the ability to access information instantaneously and from any location, thereby helping to shape world events and culture, empowering citizens of all nations, and creating social networks that allow worldwide communities to develop and form bonds based on common interests.",
"title": ""
},
{
"docid": "107b95c3bb00c918c73d82dd678e46c0",
"text": "Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, firstly introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO), licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).",
"title": ""
},
{
"docid": "8bd78fe6aa4aab1476f0599acba64181",
"text": "The denoising step for Computed Tomography (CT) images is an important challenge in medical image processing. These images are degraded by low resolution and noise. In this paper, we propose a new method for 3D CT denoising based on the Coherence Enhancing Diffusion model. Quantitative measures such as PSNR, SSIM and RMSE are computed on a phantom CT image in order to demonstrate the efficiency of our proposed model compared to a number of denoising algorithms. Furthermore, experimental results on real 3D CT data show that this approach is effective and promising in removing noise and preserving details.",
"title": ""
},
{
"docid": "9c8daaa2770a109604988700e4eaca27",
"text": "In this paper, the neural-network-based robust optimal control design for a class of uncertain nonlinear systems via adaptive dynamic programming approach is investigated. First, the robust controller of the original uncertain system is derived by adding a feedback gain to the optimal controller of the nominal system. It is also shown that this robust controller can achieve optimality under a specified cost function, which serves as the basic idea of the robust optimal control design. Then, a critic network is constructed to solve the Hamilton– Jacobi–Bellman equation corresponding to the nominal system, where an additional stabilizing term is introduced to verify the stability. The uniform ultimate boundedness of the closed-loop system is also proved by using the Lyapunov approach. Moreover, the obtained results are extended to solve decentralized optimal control problem of continuous-time nonlinear interconnected large-scale systems. Finally, two simulation examples are presented to illustrate the effectiveness of the established control scheme. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "046bcb0a39184bdf5a97dba120d8ba0f",
"text": "Finishing 90-epoch ImageNet-1k training with ResNet-50 on a NVIDIA M40 GPU takes 14 days. This training requires 10^18 single precision operations in total. On the other hand, the world’s current fastest supercomputer can finish 2 × 10^17 single precision operations per second (Dongarra et al. 2017). If we can make full use of the supercomputer for DNN training, we should be able to finish the 90-epoch ResNet-50 training in five seconds. However, the current bottleneck for fast DNN training is at the algorithm level. Specifically, the current batch size (e.g. 512) is too small to make efficient use of many processors. For large-scale DNN training, we focus on using large-batch data-parallel synchronous SGD without losing accuracy in the fixed number of epochs. The LARS algorithm (You, Gitman, and Ginsburg 2017) enables us to scale the batch size to extremely large cases (e.g. 32K). We finish the 100-epoch ImageNet training with AlexNet in 24 minutes. Same as Facebook’s result (Goyal et al. 2017), we finish the 90-epoch ImageNet training with ResNet-50 in one hour on 512 Intel KNLs.",
"title": ""
},
{
"docid": "9157266c7dea945bf5a68f058836e681",
"text": "For the task of implicit discourse relation recognition, traditional models utilizing manual features can suffer from data sparsity problem. Neural models provide a solution with distributed representations, which could encode the latent semantic information, and are suitable for recognizing semantic relations between argument pairs. However, conventional vector representations usually adopt embeddings at the word level and cannot well handle the rare word problem without carefully considering morphological information at character level. Moreover, embeddings are assigned to individual words independently, which lacks of the crucial contextual information. This paper proposes a neural model utilizing context-aware character-enhanced embeddings to alleviate the drawbacks of the current word level representation. Our experiments show that the enhanced embeddings work well and the proposed model obtains state-of-the-art results.",
"title": ""
},
{
"docid": "239e3f68b790bed3f8e9ec28f99f91d4",
"text": "This study evaluated the structure and validity of the Problem Behavior Frequency Scale-Teacher Report Form (PBFS-TR) for assessing students' frequency of specific forms of aggression and victimization, and positive behavior. Analyses were conducted on two waves of data from 727 students from two urban middle schools (Sample 1) who were rated by their teachers on the PBFS-TR and the Social Skills Improvement System (SSIS), and on data collected from 1,740 students from three urban middle schools (Sample 2) for whom data on both the teacher and student report version of the PBFS were obtained. Confirmatory factor analyses supported first-order factors representing 3 forms of aggression (physical, verbal, and relational), 3 forms of victimization (physical, verbal and relational), and 2 forms of positive behavior (prosocial behavior and effective nonviolent behavior), and higher-order factors representing aggression, victimization, and positive behavior. Strong measurement invariance was established over gender, grade, intervention condition, and time. Support for convergent validity was found based on correlations between corresponding scales on the PBFS-TR and teacher ratings on the SSIS in Sample 1. Significant correlations were also found between teacher ratings on the PBFS-TR and student ratings of their behavior on the Problem Behavior Frequency Scale-Adolescent Report (PBFS-AR) and a measure of nonviolent behavioral intentions in Sample 2. Overall the findings provided support for the PBFS-TR and suggested that teachers can provide useful data on students' aggressive and prosocial behavior and victimization experiences within the school setting. (PsycINFO Database Record (c) 2018 APA, all rights reserved).",
"title": ""
},
{
"docid": "74beaea9eccab976dc1ee7b2ddf3e4ca",
"text": "We develop theory that distinguishes trust among employees in typical task contexts (marked by low levels of situational unpredictability and danger) from trust in “highreliability” task contexts (those marked by high levels of situational unpredictability and danger). A study of firefighters showed that trust in high-reliability task contexts was based on coworkers’ integrity, whereas trust in typical task contexts was also based on benevolence and identification. Trust in high-reliability contexts predicted physical symptoms, whereas trust in typical contexts predicted withdrawal. Job demands moderated linkages with performance: trust in high-reliability task contexts was a more positive predictor of performance when unpredictable and dangerous calls were more frequent.",
"title": ""
},
{
"docid": "f519d349d928e7006955943043ab0eae",
"text": "A critical application of metabolomics is the evaluation of tissues, which are often the primary sites of metabolic dysregulation in disease. Laboratory rodents have been widely used for metabolomics studies involving tissues due to their facile handling, genetic manipulability and similarity to most aspects of human metabolism. However, the necessary step of administration of anesthesia in preparation for tissue sampling is not often given careful consideration, in spite of its potential for causing alterations in the metabolome. We examined, for the first time using untargeted and targeted metabolomics, the effect of several commonly used methods of anesthesia and euthanasia for collection of skeletal muscle, liver, heart, adipose and serum of C57BL/6J mice. The data revealed dramatic, tissue-specific impacts of tissue collection strategy. Among many differences observed, post-euthanasia samples showed elevated levels of glucose 6-phosphate and other glycolytic intermediates in skeletal muscle. In heart and liver, multiple nucleotide and purine degradation metabolites accumulated in tissues of euthanized compared to anesthetized animals. Adipose tissue was comparatively less affected by collection strategy, although accumulation of lactate and succinate in euthanized animals was observed in all tissues. Among methods of tissue collection performed pre-euthanasia, ketamine showed more variability compared to isoflurane and pentobarbital. Isoflurane induced elevated liver aspartate but allowed more rapid initiation of tissue collection. Based on these findings, we present a more optimal collection strategy for mammalian tissues and recommend that rodent tissues intended for metabolomics studies be collected under anesthesia rather than post-euthanasia.",
"title": ""
},
{
"docid": "4d902e421b6371fc40b6d7178d69426e",
"text": "Recently, social media has arisen not only as a personal communication medium, but also as a medium to communicate opinions about products and services, or even political and general events, among its users. Due to its widespread use and popularity, there is a massive amount of user reviews or opinions produced and shared daily. Twitter is one of the most widely used social media microblogging sites. Mining user opinions from social media data is not a straightforward task; it can be accomplished in different ways. In this work, an open source approach is presented, through which Twitter microblog data has been collected, pre-processed, analyzed and visualized using open source tools to perform text mining and sentiment analysis of user-contributed online reviews about two giant retail stores in the UK, namely Tesco and Asda, over the Christmas period of 2014. Collecting customer opinions can be an expensive and time-consuming task using conventional methods such as surveys. The sentiment analysis of customer opinions makes it easier for businesses to understand their competitive value in a changing market and to understand their customers' views about their products and services, which also provides insight into future marketing strategies and decision-making policies.",
"title": ""
},
{
"docid": "96d5a0fb4bb0666934819d162f1b060c",
"text": "Human gait is an important indicator of health, with applications ranging from diagnosis, monitoring, and rehabilitation. In practice, the use of gait analysis has been limited. Existing gait analysis systems are either expensive, intrusive, or require well-controlled environments such as a clinic or a laboratory. We present an accurate gait analysis system that is economical and non-intrusive. Our system is based on the Kinect sensor and thus can extract comprehensive gait information from all parts of the body. Beyond standard stride information, we also measure arm kinematics, demonstrating the wide range of parameters that can be extracted. We further improve over existing work by using information from the entire body to more accurately measure stride intervals. Our system requires no markers or battery-powered sensors, and instead relies on a single, inexpensive commodity 3D sensor with a large preexisting install base. We suggest that the proposed technique can be used for continuous gait tracking at home.",
"title": ""
}
] |
scidocsrr
|
108eb06bba679458650bcfb0ceedd835
|
Making machine learning models interpretable
|
[
{
"docid": "be9cea5823779bf5ced592f108816554",
"text": "Undoubtedly, bioinformatics is one of the fastest developing scientific disciplines in recent years. Bioinformatics is the development and application of computer methods for management, analysis, interpretation, and prediction, as well as for the design of experiments. There is already a significant number of books on bioinformatics. Some are introductory and require almost no prior experience in biology or computer science: “Bioinformatics Basics: Applications in Biological Science and Medicine” and “Introduction to Bioinformatics.” Others are targeted to biologists entering the field of bioinformatics: “Developing Bioinformatics Computer Skills.” Some more specialized books are: “An Introduction to Support Vector Machines: And Other Kernel-Based Learning Methods”, “Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids”, “Pattern Discovery in Biomolecular Data: Tools, Techniques, and Applications”, “Computational Molecular Biology: An Algorithmic Approach.” The book that is the subject of this review has a broad scope. “Bioinformatics: The machine learning approach” is aimed at two types of researchers and students. First are the biologists and biochemists who need to understand new data-driven algorithms, such as neural networks and hidden Markov models.",
"title": ""
}
] |
[
{
"docid": "e755e96c2014100a69e4a962d6f75fb5",
"text": "We propose a material acquisition approach to recover the spatially-varying BRDF and normal map of a near-planar surface from a single image captured by a handheld mobile phone camera. Our method images the surface under arbitrary environment lighting with the flash turned on, thereby avoiding shadows while simultaneously capturing high-frequency specular highlights. We train a CNN to regress an SVBRDF and surface normals from this image. Our network is trained using a large-scale SVBRDF dataset and designed to incorporate physical insights for material estimation, including an in-network rendering layer to model appearance and a material classifier to provide additional supervision during training. We refine the results from the network using a dense CRF module whose terms are designed specifically for our task. The framework is trained end-to-end and produces high quality results for a variety of materials. We provide extensive ablation studies to evaluate our network on both synthetic and real data, while demonstrating significant improvements in comparisons with prior works.",
"title": ""
},
{
"docid": "559a4175347e5fea57911d9b8c5080e6",
"text": "Online social networks offering various services have become ubiquitous in our daily life. Meanwhile, users nowadays are usually involved in multiple online social networks simultaneously to enjoy specific services provided by different networks. Formally, social networks that share some common users are named as partially aligned networks. In this paper, we want to predict the formation of social links in multiple partially aligned social networks at the same time, which is formally defined as the multi-network link (formation) prediction problem. In multiple partially aligned social networks, users can be extensively correlated with each other by various connections. To categorize these diverse connections among users, 7 \"intra-network social meta paths\" and 4 categories of \"inter-network social meta paths\" are proposed in this paper. These \"social meta paths\" can cover a wide variety of connection information in the network, some of which can be helpful for solving the multi-network link prediction problem but some can be not. To utilize useful connection, a subset of the most informative \"social meta paths\" are picked, the process of which is formally defined as \"social meta path selection\" in this paper. An effective general link formation prediction framework, Mli (Multi-network Link Identifier), is proposed in this paper to solve the multi-network link (formation) prediction problem. Built with heterogenous topological features extracted based on the selected \"social meta paths\" in the multiple partially aligned social networks, Mli can help refine and disambiguate the prediction results reciprocally in all aligned networks. Extensive experiments conducted on real-world partially aligned heterogeneous networks, Foursquare and Twitter, demonstrate that Mli can solve the multi-network link prediction problem very well.",
"title": ""
},
{
"docid": "17c49edf5842fb918a3bd4310d910988",
"text": "In this paper, we present a real-time salient object detection system based on the minimum spanning tree. Due to the fact that background regions are typically connected to the image boundaries, salient objects can be extracted by computing the distances to the boundaries. However, measuring the image boundary connectivity efficiently is a challenging problem. Existing methods either rely on superpixel representation to reduce the processing units or approximate the distance transform. Instead, we propose an exact and iteration free solution on a minimum spanning tree. The minimum spanning tree representation of an image inherently reveals the object geometry information in a scene. Meanwhile, it largely reduces the search space of shortest paths, resulting an efficient and high quality distance transform algorithm. We further introduce a boundary dissimilarity measure to compliment the shortage of distance transform for salient object detection. Extensive evaluations show that the proposed algorithm achieves the leading performance compared to the state-of-the-art methods in terms of efficiency and accuracy.",
"title": ""
},
{
"docid": "c495fadfd4c3e17948e71591e84c3398",
"text": "A real-time, digital algorithm for pulse width modulation (PWM) with distortion-free baseband is developed in this paper. The algorithm not only eliminates the intrinsic baseband distortion of digital PWM but also avoids the appearance of side-band components of the carrier in the baseband even for low switching frequencies. Previous attempts to implement digital PWM with these spectral properties required several processors due to their complexity; the proposed algorithm uses only several FIR filters and a few multiplications and additions and therefore is implemented in real time on a standard DSP. The performance of the algorithm is compared with that of uniform, double-edge PWM modulator via experimental measurements for several bandlimited modulating signals.",
"title": ""
},
{
"docid": "93f0026a850a620ecabafdbfec3abb72",
"text": "Knet (pronounced \"kay-net\") is the Koç University machine learning framework implemented in Julia, a high-level, high-performance, dynamic programming language. Unlike gradient generating compilers like Theano and TensorFlow which restrict users into a modeling mini-language, Knet allows models to be defined by just describing their forward computation in plain Julia, allowing the use of loops, conditionals, recursion, closures, tuples, dictionaries, array indexing, concatenation and other high level language features. High performance is achieved by combining automatic differentiation of most of Julia with efficient GPU kernels and memory management. Several examples and benchmarks are provided to demonstrate that GPU support and automatic differentiation of a high level language are sufficient for concise definition and efficient training of sophisticated models.",
"title": ""
},
{
"docid": "46df05f01a027359f23d4de2396e2586",
"text": "Dialog act identification plays an important role in understanding conversations. It has been widely applied in many fields such as dialogue systems, automatic machine translation, automatic speech recognition, and especially useful in systems with human-computer natural language dialogue interfaces such as virtual assistants and chatbots. The first step of identifying dialog act is identifying the boundary of the dialog act in utterances. In this paper, we focus on segmenting the utterance according to the dialog act boundaries, i.e. functional segments identification, for Vietnamese utterances. We investigate carefully functional segment identification in two approaches: (1) machine learning approach using maximum entropy (ME) and conditional random fields (CRFs); (2) deep learning approach using bidirectional Long Short-Term Memory (LSTM) with a CRF layer (Bi-LSTM-CRF) on two different conversational datasets: (1) Facebook messages (Message data); (2) transcription from phone conversations (Phone data). To the best of our knowledge, this is the first work that applies deep learning based approach to dialog act segmentation. As the results show, deep learning approach performs appreciably better as to compare with traditional machine learning approaches. Moreover, it is also the first study that tackles dialog act and functional segment identification for Vietnamese.",
"title": ""
},
{
"docid": "f66c9aa537630fdbff62d8d49205123b",
"text": "This workshop will explore community based repositories for educational data and analytic tools that are used to connect researchers and reduce the barriers to data sharing. Leading innovators in the field, as well as attendees, will identify and report on bottlenecks that remain toward our goal of a unified repository. We will discuss these as well as possible solutions. We will present LearnSphere, an NSF funded system that supports collaborating on and sharing a wide variety of educational data, learning analytics methods, and visualizations while maintaining confidentiality. We will then have hands-on sessions in which attendees have the opportunity to apply existing learning analytics workflows to their choice of educational datasets in the repository (using a simple drag-and-drop interface), add their own learning analytics workflows (requires very basic coding experience), or both. Leaders and attendees will then jointly discuss the unique benefits as well as the limitations of these solutions. Our goal is to create building blocks to allow researchers to integrate their data and analysis methods with others, in order to advance the future of learning science.",
"title": ""
},
{
"docid": "506a6a98e87fb5a6dc7e5cbe9cf27262",
"text": "Image-to-image translation has recently received significant attention due to advances in deep learning. Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex inner- and cross-domain variations. To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain. We assume that an image comprises a content component which is shared across domains, and a style component specific to each domain. Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain. To avoid semantic inconsistencies during translation that naturally appear due to the large inner- and cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels. Experimental results on various datasets show that EGSC-IT not only translates the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process. (Figure 1: Exemplar guided image translation examples of GTA5 → BDD.)",
"title": ""
},
{
"docid": "46829dde25c66191bcefae3614c2dd3f",
"text": "User-generated content (UGC) on the Web, especially on social media platforms, facilitates the association of additional information with digital resources; thus, it can provide valuable supplementary content. However, UGC varies in quality and, consequently, raises the challenge of how to maximize its utility for a variety of end-users. This study aims to provide researchers and Web data curators with comprehensive answers to the following questions: What are the existing approaches and methods for assessing and ranking UGC? What features and metrics have been used successfully to assess and predict UGC value across a range of application domains? What methods can be effectively employed to maximize that value? This survey is composed of a systematic review of approaches for assessing and ranking UGC: results are obtained by identifying and comparing methodologies within the context of short text-based UGC on the Web. Existing assessment and ranking approaches adopt one of four framework types: the community-based framework takes into consideration the value assigned to content by a crowd of humans, the end-user--based framework adapts and personalizes the assessment and ranking process with respect to a single end-user, the designer-based framework encodes the software designer’s values in the assessment and ranking method, and the hybrid framework employs methods from more than one of these types. This survey suggests a need for further experimentation and encourages the development of new approaches for the assessment and ranking of UGC.",
"title": ""
},
{
"docid": "6c3f80b453d51e364eca52656ed54e62",
"text": "Despite substantial recent research activity related to continuous delivery and deployment (CD), there has not yet been a systematic, empirical study on how the practices often associated with continuous deployment have found their way into the broader software industry. This raises the question to what extent our knowledge of the area is dominated by the peculiarities of a small number of industrial leaders, such as Facebook. To address this issue, we conducted a mixed-method empirical study, consisting of a pre-study on literature, qualitative interviews with 20 software developers or release engineers with heterogeneous backgrounds, and a Web-based quantitative survey that attracted 187 complete responses. A major trend in the results of our study is that architectural issues are currently one of the main barriers for CD adoption. Further, feature toggles as an implementation technique for partial rollouts lead to unwanted complexity, and require research on better abstractions and modelling techniques for runtime variability. Finally, we conclude that practitioners are in need for more principled approaches to release decision making, e.g., which features to conduct A/B tests on, or which metrics to evaluate.",
"title": ""
},
{
"docid": "52212ff3e1c85b5f5c3fcf0ec71f6f8b",
"text": "Embodied cognition theory proposes that individuals' abstract concepts can be associated with sensorimotor processes. The authors examined the effects of teaching participants novel embodied metaphors, not based in prior physical experience, and found evidence suggesting that they lead to embodied simulation, suggesting refinements to current models of embodied cognition. Creating novel embodiments of abstract concepts in the laboratory may be a useful method for examining mechanisms of embodied cognition.",
"title": ""
},
{
"docid": "712cd41c525b6632a7a5c424173d6f1e",
"text": "The use of 3-D multicellular spheroid (MCS) models is increasingly being accepted as a viable means to study cell-cell, cell-matrix and cell-drug interactions. Behavioral differences between traditional monolayer (2-D) cell cultures and more recent 3-D MCS confirm that 3-D MCS more closely model the in vivo environment. However, analyzing the effect of pharmaceutical agents on both monolayer cultures and MCS is very time intensive. This paper reviews the use of electrical impedance spectroscopy (EIS), a label-free whole cell assay technique, as a tool for automated screening of cell drug interactions in MCS models for biologically/physiologically relevant events over long periods of time. EIS calculates the impedance of a sample by applying an AC current through a range of frequencies and measuring the resulting voltage. This review will introduce techniques used in impedance-based analysis of 2-D systems; highlight recently developed impedance-based techniques for analyzing 3-D cell cultures; and discuss applications of 3-D culture impedance monitoring systems.",
"title": ""
},
{
"docid": "cc92787280db22c46a159d95f6990473",
"text": "A novel formulation for the voltage waveforms in high efficiency linear power amplifiers is described. This formulation demonstrates that a constant optimum efficiency and output power can be obtained over a continuum of solutions by utilizing appropriate harmonic reactive impedance terminations. A specific example is confirmed experimentally. This new formulation has some important implications for the possibility of realizing broadband >10% high efficiency linear RF power amplifiers.",
"title": ""
},
{
"docid": "ef26995e3979f479f4c3628283816d5d",
"text": "This article addresses the position taken by Clark (1983) that media do not influence learning under any conditions. The article reframes the questions raised by Clark to explore the conditions under which media will influence learning. Specifically, it posits the need to consider the capabilities of media, and the methods that employ them, as they interact with the cognitive and social processes by which knowledge is constructed. This approach is examined within the context of two major media-based projects, one which uses computers and the other,video. The article discusses the implications of this approach for media theory, research and practice.",
"title": ""
},
{
"docid": "55a0fb2814fde7890724a137fc414c88",
"text": "Quantitative structure-activity relationship modeling is one of the major computational tools employed in medicinal chemistry. However, throughout its entire history it has drawn both praise and criticism concerning its reliability, limitations, successes, and failures. In this paper, we discuss (i) the development and evolution of QSAR; (ii) the current trends, unsolved problems, and pressing challenges; and (iii) several novel and emerging applications of QSAR modeling. Throughout this discussion, we provide guidelines for QSAR development, validation, and application, which are summarized in best practices for building rigorously validated and externally predictive QSAR models. We hope that this Perspective will help communications between computational and experimental chemists toward collaborative development and use of QSAR models. We also believe that the guidelines presented here will help journal editors and reviewers apply more stringent scientific standards to manuscripts reporting new QSAR studies, as well as encourage the use of high quality, validated QSARs for regulatory decision making.",
"title": ""
},
{
"docid": "00223ccf5b5aebfc23c76afb7192e3f7",
"text": "Computer security systems and technology have passed through several changes. The trends have moved from what you know (e.g. password, PIN, etc.) to what you have (ATM card, driving license, etc.) and presently to who you are (biometry), or combinations of two or more of the three. This technology (biometry) has come to solve the problems identified with knowledge-based and token-based authentication systems. It is possible to forget your password, and what you have can just as well be stolen. The security approach of determining who you are is referred to as BIOMETRICS. Biometrics, in a nutshell, is the use of your body as a password. This paper explores the various methods of biometric identification that have evolved over the years and the features used for each modality.",
"title": ""
},
{
"docid": "a7618e1370db3fca4262f8d36979aa91",
"text": "Generative Adversarial Network (GAN) has been shown to possess the capability to learn distributions of data, given infinite capacity of models [1, 2]. Empirically, approximations with deep neural networks seem to have “sufficiently large” capacity and lead to several success in many applications, such as image generation. However, most of the results are difficult to evaluate because of the curse of dimensionality and the unknown distribution of the data. To evaluate GANs, in this paper, we consider simple one-dimensional data coming from parametric distributions circumventing the aforementioned problems. We formulate rigorous techniques for evaluation under this setting. Based on this evaluation, we find that many state-ofthe-art GANs are very difficult to train to learn the true distribution and can usually only find some of the modes. If the GAN has learned, such as MMD GAN, we observe it has some generalization capabilities.",
"title": ""
},
{
"docid": "82865170278997209a650aa8be483703",
"text": "This paper presents a novel dataset for traffic accidents analysis. Our goal is to resolve the lack of public data for research about automatic spatio-temporal annotations for traffic safety in the roads. Through the analysis of the proposed dataset, we observed a significant degradation of object detection in pedestrian category in our dataset, due to the object sizes and complexity of the scenes. To this end, we propose to integrate contextual information into conventional Faster R-CNN using Context Mining (CM) and Augmented Context Mining (ACM) to complement the accuracy for small pedestrian detection. Our experiments indicate a considerable improvement in object detection accuracy: +8.51% for CM and +6.20% for ACM. Finally, we demonstrate the performance of accident forecasting in our dataset using Faster R-CNN and an Accident LSTM architecture. We achieved an average of 1.684 seconds in terms of Time-To-Accident measure with an Average Precision of 47.25%. Our Webpage for the paper is https:",
"title": ""
},
{
"docid": "1c8ac344f85ff4d4a711536841168b6a",
"text": "Internet Protocol Television (IPTV) is an increasingly popular multimedia service which is used to deliver television, video, audio and other interactive content over proprietary IP-based networks. Video on Demand (VoD) is one of the most popular IPTV services, and is very important for IPTV providers since it represents the second most important revenue stream after monthly subscriptions. In addition to high-quality VoD content, profitable VoD service provisioning requires an enhanced content accessibility to greatly improve end-user experience. Moreover, it is imperative to offer innovative features to attract new customers and retain existing ones. To achieve this goal, IPTV systems typically employ VoD recommendation engines to offer personalized lists of VoD items that are potentially interesting to a user from a large amount of available titles. In practice, a good recommendation engine does not offer popular and well-known titles, but is rather able to identify interesting among less popular items which would otherwise be hard to find. In this paper we report our experience in building a VoD recommendation system. The presented evaluation shows that our recommendation system is able to recommend less popular items while operating under a high load of end-user requests.",
"title": ""
},
{
"docid": "97065954a10665dee95977168b9e6c60",
"text": "We describe the current status of Pad++, a zooming graphical interface that we are exploring as an alternative to traditional window and icon-based approaches to interface design. We discuss the motivation for Pad++, describe the implementation, and present prototype applications. In addition, we introduce an informational physics strategy for interface design and briefly compare it with metaphor-based design strategies.",
"title": ""
}
] |
scidocsrr
|
8444760ba8bd035fa3fc36a4d3d7fc61
|
Low Cost Self-assistive Voice Controlled Technology for Disabled People
|
[
{
"docid": "802d66fda1701252d1addbd6d23f6b4c",
"text": "Powered wheelchair users often struggle to drive safely and effectively and, in more critical cases, can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists users as and when they require help. The system uses a multiple-hypothesis method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but also, perhaps more importantly, characterize the user performance in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions while driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely but also they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.",
"title": ""
}
] |
[
{
"docid": "7dc9afa44cc609a658b11a949829e2b9",
"text": "To achieve security in wireless sensor networks, it is important to he able to encrypt messages sent among sensor nodes. Keys for encryption purposes must he agreed upon by communicating nodes. Due to resource constraints, achieving such key agreement in wireless sensor networks is nontrivial. Many key agreement schemes used in general networks, such as Diffie-Hellman and public-key based schemes, are not suitable for wireless sensor networks. Pre-distribution of secret keys for all pairs of nodes is not viable due to the large amount of memory used when the network size is large. Recently, a random key pre-distribution scheme and its improvements have been proposed. A common assumption made by these random key pre-distribution schemes is that no deployment knowledge is available. Noticing that in many practical scenarios, certain deployment knowledge may be available a priori, we propose a novel random key pre-distribution scheme that exploits deployment knowledge and avoids unnecessary key assignments. We show that the performance (including connectivity, memory usage, and network resilience against node capture) of sensor networks can he substantially improved with the use of our proposed scheme. The scheme and its detailed performance evaluation are presented in this paper.",
"title": ""
},
{
"docid": "030c8aeb4e365bfd2fdab710f8c9f598",
"text": "By combining linear graph theory with the principle of virtual work, a dynamic formulation is obtained that extends graph-theoretic modelling methods to the analysis of exible multibody systems. The system is represented by a linear graph, in which nodes represent reference frames on rigid and exible bodies, and edges represent components that connect these frames. By selecting a spanning tree for the graph, the analyst can choose the set of coordinates appearing in the nal system of equations. This set can include absolute, joint, or elastic coordinates, or some combination thereof. If desired, all non-working constraint forces and torques can be automatically eliminated from the dynamic equations by exploiting the properties of virtual work. The formulation has been implemented in a computer program, DynaFlex, that generates the equations of motion in symbolic form. Three examples are presented to demonstrate the application of the formulation, and to validate the symbolic computer implementation.",
"title": ""
},
{
"docid": "7394f3000da8af0d4a2b33fed4f05264",
"text": "We often base our decisions on uncertain data - for instance, when consulting the weather forecast before deciding what to wear. Due to their uncertainty, such forecasts can differ by provider. To make an informed decision, many people compare several forecasts, which is a time-consuming and cumbersome task. To facilitate comparison, we identified three aggregation mechanisms for forecasts: manual comparison and two mechanisms of computational aggregation. In a survey, we compared the mechanisms using different representations. We then developed a weather application to evaluate the most promising candidates in a real-world study. Our results show that aggregation increases users' confidence in uncertain data, independent of the type of representation. Further, we find that for daily events, users prefer to use computationally aggregated forecasts. However, for high-stakes events, they prefer manual comparison. We discuss how our findings inform the design of improved interfaces for comparison of uncertain data, including non-weather purposes.",
"title": ""
},
{
"docid": "1644d83b83383bffbd01b0ae83c3836c",
"text": "The dysregulation of inflammatory responses and of immune self-tolerance is considered to be a key element in the autoreactive immune response in multiple sclerosis (MS). Regulatory T (TREG) cells have emerged as crucial players in the pathogenetic scenario of CNS autoimmune inflammation. Targeted deletion of TREG cells causes spontaneous autoimmune disease in mice, whereas augmentation of TREG-cell function can prevent the development of or alleviate variants of experimental autoimmune encephalomyelitis, the animal model of MS. Recent findings indicate that MS itself is also accompanied by dysfunction or impaired maturation of TREG cells. The development and function of TREG cells is closely linked to dendritic cells (DCs), which have a central role in the activation and reactivation of encephalitogenic cells in the CNS. DCs and TREG cells have an intimate bidirectional relationship, and, in combination with other factors and cell types, certain types of DCs are capable of inducing TREG cells. Consequently, TREG cells and DCs have been recognized as potential therapeutic targets in MS. This Review compiles the current knowledge on the role and function of various subsets of TREG cells in MS and experimental autoimmune encephalomyelitis. We also highlight the role of tolerogenic DCs and their bidirectional interaction with TREG cells during CNS autoimmunity.",
"title": ""
},
{
"docid": "23ffdf5e7797e7f01c6d57f1e5546026",
"text": "Classroom experiments that evaluate the effectiveness of educational technologies do not typically examine the effects of classroom contextual variables (e.g., out-of-software help-giving and external distractions). Yet these variables may influence students' instructional outcomes. In this paper, we introduce the Spatial Classroom Log Explorer (SPACLE): a prototype tool that facilitates the rapid discovery of relationships between within-software and out-of-software events. Unlike previous tools for retrospective analysis, SPACLE replays moment-by-moment analytics about student and teacher behaviors in their original spatial context. We present a data analysis workflow using SPACLE and demonstrate how this workflow can support causal discovery. We share the results of our initial replay analyses using SPACLE, which highlight the importance of considering spatial factors in the classroom when analyzing ITS log data. We also present the results of an investigation into the effects of student-teacher interactions on student learning in K-12 blended classrooms, using our workflow, which combines replay analysis with SPACLE and causal modeling. Our findings suggest that students' awareness of being monitored by their teachers may promote learning, and that \"gaming the system\" behaviors may extend outside of educational software use.",
"title": ""
},
{
"docid": "b2f66e8508978c392045b5f9e99362a1",
"text": "In this paper we have proposed a linguistically informed recursive neural network architecture for automatic extraction of cause-effect relations from text. These relations can be expressed in arbitrarily complex ways. The architecture uses word level embeddings and other linguistic features to detect causal events and their effects mentioned within a sentence. The extracted events and their relations are used to build a causal-graph after clustering and appropriate generalization, which is then used for predictive purposes. We have evaluated the performance of the proposed extraction model with respect to two baseline systems,one a rule-based classifier, and the other a conditional random field (CRF) based supervised model. We have also compared our results with related work reported in the past by other authors on SEMEVAL data set, and found that the proposed bidirectional LSTM model enhanced with an additional linguistic layer performs better. We have also worked extensively on creating new annotated datasets from publicly available data, which we are willing to share with the community.",
"title": ""
},
{
"docid": "a9346f8d40a8328e963774f2604da874",
"text": "Abstract-Sign language is a lingua among the speech and the hearing impaired community. It is hard for most people who are not familiar with sign language to communicate without an interpreter. Sign language recognition appertains to track and recognize the meaningful emotion of human made with fingers, hands, head, arms, face etc. The technique that has been proposed in this work, transcribes the gestures from a sign language to a spoken language which is easily understood by the hearing. The gestures that have been translated include alphabets, words from static images. This becomes more important for the people who completely rely on the gestural sign language for communication tries to communicate with a person who does not understand the sign language. We aim at representing features which will be learned by a technique known as convolutional neural networks (CNN), contains four types of layers: convolution layers, pooling/subsampling layers, nonlinear layers, and fully connected layers. The new representation is expected to capture various image features and complex non-linear feature interactions. A softmax layer will be used to recognize signs. Keywords-Convolutional Neural Networks, Softmax (key words) __________________________________________________*****_________________________________________________",
"title": ""
},
{
"docid": "881a0d8022142dc6200777835da2d323",
"text": "Muslim-majority countries do not use formal financial services (Honohon 2007).1 Even when financial services are available, some people view conventional products as incompatible with the financial principles set forth in Islamic law. In recent years, some microfinance institutions (MFIs) have stepped in to service low-income Muslim clients who demand products consistent with Islamic financial principles—leading to the emergence of Islamic microfinance as a new market niche.",
"title": ""
},
{
"docid": "f6cb93fe2e51bdfb82199a138c225c54",
"text": "Puberty suppression using gonadotropin-releasing-hormone analogues (GnRHa) has become increasingly accepted as an intervention during the early stages of puberty (Tanner stage 2–3) in individuals with clear signs of childhood-onset gender dysphoria. However, lowering the age threshold for using medical intervention for children with gender dysphoria is still a matter of contention, and is more controversial than treating the condition in adolescents and adults, as children with gender dysphoria are more likely to express an unstable pattern of gender variance. Furthermore, concerns have been expressed regarding the risks of puberty suppression, which are poorly understood, and the child's ability to make decisions and provide informed consent. However, even if the limited data available mean that it is not possible to make a conclusive treatment recommendation, some safety criteria for puberty suppression can be identified and applied.",
"title": ""
},
{
"docid": "ee947daebb5e560570edb1f3ad553b6e",
"text": "We consider the problem of embedding entities and relations of knowledge bases into low-dimensional continuous vector spaces (distributed representations). Unlike most existing approaches, which are primarily efficient for modelling pairwise relations between entities, we attempt to explicitly model both pairwise relations and long-range interactions between entities, by interpreting them as linear operators on the low-dimensional embeddings of the entities. Therefore, in this paper we introduces path ranking to capture the long-range interactions of knowledge graph and at the same time preserve the pairwise relations of knowledge graph; we call it structured embedding via pairwise relation and longrange interactions (referred to as SePLi). Comparing with the-state-of-the-art models, SePLi achieves better performances of embeddings.",
"title": ""
},
{
"docid": "a8a51268e3e4dc3b8dd5102dafcb8f36",
"text": "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node’s local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.",
"title": ""
},
{
"docid": "3613ae9cfcadee0053a270fe73c6e069",
"text": "Depth-map merging approaches have become more and more popular in multi-view stereo (MVS) because of their flexibility and superior performance. The quality of depth map used for merging is vital for accurate 3D reconstruction. While traditional depth map estimation has been performed in a discrete manner, we suggest the use of a continuous counterpart. In this paper, we first integrate silhouette information and epipolar constraint into the variational method for continuous depth map estimation. Then, several depth candidates are generated based on a multiple starting scales (MSS) framework. From these candidates, refined depth maps for each view are synthesized according to path-based NCC (normalized cross correlation) metric. Finally, the multiview depth maps are merged to produce 3D models. Our algorithm excels at detail capture and produces one of the most accurate results among the current algorithms for sparse MVS datasets according to the Middlebury benchmark. Additionally, our approach shows its outstanding robustness and accuracy in free-viewpoint video scenario.",
"title": ""
},
{
"docid": "3738d3c5d5bf4a3de55aa638adac07bb",
"text": "The term malware stands for malicious software. It is a program installed on a system without the knowledge of owner of the system. It is basically installed by the third party with the intention to steal some private data from the system or simply just to play pranks. This in turn threatens the computer’s security, wherein computer are used by one’s in day-to-day life as to deal with various necessities like education, communication, hospitals, banking, entertainment etc. Different traditional techniques are used to detect and defend these malwares like Antivirus Scanner (AVS), firewalls, etc. But today malware writers are one step forward towards then Malware detectors. Day-by-day they write new malwares, which become a great challenge for malware detectors. This paper focuses on basis study of malwares and various detection techniques which can be used to detect malwares.",
"title": ""
},
{
"docid": "49791684a7a455acc9daa2ca69811e74",
"text": "This paper analyzes the basic method of digital video image processing, studies the vehicle license plate recognition system based on image processing in intelligent transport system, presents a character recognition approach based on neural network perceptron to solve the vehicle license plate recognition in real-time traffic flow. Experimental results show that the approach can achieve better positioning effect, has a certain robustness and timeliness.",
"title": ""
},
{
"docid": "dc8712a71084b01c6ce1cc5fa4618d76",
"text": "Compensation is crucial for improving performance of inductive-power-transfer (IPT) converters. With proper compensation at some specific frequencies, an IPT converter can achieve load-independent constant output voltage or current, near zero reactive power, and soft switching of power switches simultaneously, resulting in simplified control circuitry, reduced component ratings, and improved power conversion efficiency. However, constant output voltage or current depends significantly on parameters of the transformer, which is often space constrained, making the converter design hard to optimize. To free the design from the constraints imposed by the transformer parameters, this paper proposes a family of higher order compensation circuits for IPT converters that achieves any desired constant-voltage or constant-current (CC) output with near zero reactive power and soft switching. Detailed derivation of the compensation method is given for the desired transfer function not constrained by transformer parameters. Prototypes of CC IPT configurations based on a single transformer are constructed to verify the analysis with three different output specifications.",
"title": ""
},
{
"docid": "bfd9b9c07b14acd064b2242b48e37ce2",
"text": "We propose a fully unsupervised framework for ad-hoc cross-lingual information retrieval (CLIR) which requires no bilingual data at all. The framework leverages shared cross-lingual word embedding spaces in which terms, queries, and documents can be represented, irrespective of their actual language. The shared embedding spaces are induced solely on the basis of monolingual corpora in two languages through an iterative process based on adversarial neural networks. Our experiments on the standard CLEF CLIR collections for three language pairs of varying degrees of language similarity (English-Dutch/Italian/Finnish) demonstrate the usefulness of the proposed fully unsupervised approach. Our CLIR models with unsupervised cross-lingual embeddings outperform baselines that utilize cross-lingual embeddings induced relying on word-level and document-level alignments. We then demonstrate that further improvements can be achieved by unsupervised ensemble CLIR models. We believe that the proposed framework is the first step towards development of effective CLIR models for language pairs and domains where parallel data are scarce or non-existent.",
"title": ""
},
{
"docid": "74136e5c4090cc990f62c399781c9bb3",
"text": "This paper compares statistical techniques for text classification using Naïve Bayes and Support Vector Machines, in context of Urdu language. A large corpus is used for training and testing purpose of the classifiers. However, those classifiers cannot directly interpret the raw dataset, so language specific preprocessing techniques are applied on it to generate a standardized and reduced-feature lexicon. Urdu language is morphological rich language which makes those tasks complex. Statistical characteristics of corpus and lexicon are measured which show satisfactory results of text preprocessing module. The empirical results show that Support Vector Machines outperform Naïve Bayes classifier in terms of classification accuracy.",
"title": ""
},
{
"docid": "f99316b4346666cc0ac45058f1d4e410",
"text": "Penetration testing is the process of detecting computer vulnerabilities and gaining access and data on targeted computer systems with goal to detect vulnerabilities and security issues and proactively protect system. In this paper we presented case of internal penetration test which helped to proactively prevent potential weaknesses of targeted system with inherited vulnerabilities which is Bring Your Own Device (BYOD). Many organizations suffer great losses due to risk materialization because of missing implementing standards for information security that includes patching, change management, active monitoring and penetration testing, with goal of better dealing with security vulnerabilities. With BYOD policy in place companies taking greater risk appetite allowing mobile device to be used on corporate networks. In this paper we described how we used network hacking techniques for penetration testing for the right cause which is to prevent potential misuse of computer vulnerabilities. This paper shows how different techniques and tools can be jointly used in step by step process to successfully perform penetration testing analysis and reporting.",
"title": ""
},
{
"docid": "41c99f4746fc299ae886b6274f899c4b",
"text": "The disruptive power of blockchain technologies represents a great opportunity to re-imagine standard practices of providing radio access services by addressing critical areas such as deployment models that can benefit from brand new approaches. As a starting point for this debate, we look at the current limits of infrastructure sharing, and specifically at the Small-Cell-as-a-Service trend, asking ourselves how we could push it to its natural extreme: a scenario in which any individual home or business user can become a service provider for mobile network operators (MNOs), freed from all the scalability and legal constraints that are inherent to the current modus operandi. We propose the adoption of smart contracts to implement simple but effective Service Level Agreements (SLAs) between small cell providers and MNOs, and present an example contract template based on the Ethereum blockchain.",
"title": ""
},
{
"docid": "b0be609048c8497f69991c7acc76dc9c",
"text": "We propose a novel recurrent neural network-based approach to simultaneously handle nested named entity recognition and nested entity mention detection. The model learns a hypergraph representation for nested entities using features extracted from a recurrent neural network. In evaluations on three standard data sets, we show that our approach significantly outperforms existing state-of-the-art methods, which are feature-based. The approach is also efficient: it operates linearly in the number of tokens and the number of possible output labels at any token. Finally, we present an extension of our model that jointly learns the head of each entity mention.",
"title": ""
}
] |
scidocsrr
|
a3e91f85e91dfef0530a43a5b7b10a44
|
Learning to Select Knowledge for Response Generation in Dialog Systems
|
[
{
"docid": "36c26d1be5d9ef1ffaf457246bbc3c90",
"text": "In knowledge grounded conversation, domain knowledge plays an important role in a special domain such as Music. The response of knowledge grounded conversation might contain multiple answer entities or no entity at all. Although existing generative question answering (QA) systems can be applied to knowledge grounded conversation, they either have at most one entity in a response or cannot deal with out-ofvocabulary entities. We propose a fully data-driven generative dialogue system GenDS that is capable of generating responses based on input message and related knowledge base (KB). To generate arbitrary number of answer entities even when these entities never appear in the training set, we design a dynamic knowledge enquirer which selects different answer entities at different positions in a single response, according to different local context. It does not rely on the representations of entities, enabling our model deal with out-ofvocabulary entities. We collect a human-human conversation data (ConversMusic) with knowledge annotations. The proposed method is evaluated on CoversMusic and a public question answering dataset. Our proposed GenDS system outperforms baseline methods significantly in terms of the BLEU, entity accuracy, entity recall and human evaluation. Moreover,the experiments also demonstrate that GenDS works better even on small datasets.",
"title": ""
},
{
"docid": "cffe9e1a98238998c174e93c73785576",
"text": "๏ The experimental results show that the proposed model effectively generate more diverse and meaningful responses involving more accurate relevant entities compared with the state-of-the-art baselines. We collect a multi-turn conversation corpus which includes not only facts related inquiries but also knowledge-based chit-chats. The data is publicly available at https:// github.com/liushuman/neural-knowledge-diffusion. We obtain the element information of each movie from https://movie.douban.com/ and build the knowledge base K. The question-answering dialogues and knowledge related chit-chat are crawled from https://zhidao.baidu.com/ and https://www.douban.com/group/. The conversations are grounded on the knowledge using NER, string match, and artificial scoring and filtering rules. The total 32977 conversations consisting of 104567 utterances are divided into training (32177) and testing set (800). Overview",
"title": ""
}
] |
[
{
"docid": "ba67c3006c6167550bce500a144e63f1",
"text": "This paper provides an overview of different methods for evaluating automatic summarization systems. The challenges in evaluating summaries are characterized. Both intrinsic and extrinsic approaches are discussed. Methods for assessing informativeness and coherence are described. The advantages and disadvantages of specific methods are assessed, along with criteria for choosing among them. The paper concludes with some suggestions for future directions.",
"title": ""
},
{
"docid": "f9076f4dbc5789e89ed758d0ad2c6f18",
"text": "This paper presents an innovative manner of obtaining discriminative texture signatures by using the LBP approach to extract additional sources of information from an input image and by using fractal dimension to calculate features from these sources. Four strategies, called Min, Max, Diff Min and Diff Max , were tested, and the best success rates were obtained when all of them were employed together, resulting in an accuracy of 99.25%, 72.50% and 86.52% for the Brodatz, UIUC and USPTex databases, respectively, using Linear Discriminant Analysis. These results surpassed all the compared methods in almost all the tests and, therefore, confirm that the proposed approach is an effective tool for texture analysis. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b0c86f449987ffe8a1dc3dfc39c66f73",
"text": "Smartphones are an ideal platform for local multiplayer games, thanks to their computational and networking capabilities as well as their popularity and portability. However, existing game engines do not exploit the locality of players to improve game latency. In this paper, we propose MicroPlay, a complete networking framework for local multiplayer mobile games. To the best of our knowledge, this is the first framework that exploits local connections between smartphones, and in particular, the broadcast nature of the wireless medium, to provide smooth, accurate rendering of all players with two desired properties. First, it performs direct-input rendering (i.e., without any inter- or extrapolation of game state) for all players; second, it provides very low game latency. We implement a MicroPlay prototype on Android phones, as well as an example multiplayer car racing game, called Racer, in order to demonstrate MicroPlay's capabilities. Our experiments show that cars can be rendered smoothly, without any prediction of state, and with only 20-30 ms game latency.",
"title": ""
},
{
"docid": "06dfc5bb4df3be7f9406be818efe28e7",
"text": "People often make decisions in health care that are not in their best interest, ranging from failing to enroll in health insurance to which they are entitled, to engaging in extremely harmful behaviors. Traditional economic theory provides a limited tool kit for improving behavior because it assumes that people make decisions in a rational way, have the mental capacity to deal with huge amounts of information and choice, and have tastes endemic to them and not open to manipulation. Melding economics with psychology, behavioral economics acknowledges that people often do not act rationally in the economic sense. It therefore offers a potentially richer set of tools than provided by traditional economic theory to understand and influence behaviors. Only recently, however, has it been applied to health care. This article provides an overview of behavioral economics, reviews some of its contributions, and shows how it can be used in health care to improve people's decisions and health.",
"title": ""
},
{
"docid": "3849284adb68f41831434afbf23be9ed",
"text": "Automatic estrus detection techniques in dairy cows have been present by different traits. Pedometers and accelerators are the most common sensor equipment. Most of the detection methods are associated with the supervised classification technique, which the training set becomes a crucial reference. The training set obtained by visual observation is subjective and time consuming. Another limitation of this approach is that it usually does not consider the factors affecting successful alerts, such as the discriminative figure, activity type of cows, the location and direction of the sensor node placed on the neck collar of a cow. This paper presents a novel estrus detection method that uses k-means clustering algorithm to create the training set online for each cow. And the training set is finally used to build an activity classification model by SVM. The activity index counted by the classification results in each sampling period can measure cow’s activity variation for assessing the onset of estrus. The experimental results indicate that the peak of estrus time are higher than that of non-estrus time at least twice in the activity index curve, and it can enhance the sensitivity and significantly reduce the error rate.",
"title": ""
},
{
"docid": "0f6183057c6b61cefe90e4fa048ab47f",
"text": "This paper investigates the use of Deep Bidirectional Long Short-Term Memory based Recurrent Neural Networks (DBLSTM-RNNs) for voice conversion. Temporal correlations across speech frames are not directly modeled in frame-based methods using conventional Deep Neural Networks (DNNs), which results in a limited quality of the converted speech. To improve the naturalness and continuity of the speech output in voice conversion, we propose a sequence-based conversion method using DBLSTM-RNNs to model not only the frame-wised relationship between the source and the target voice, but also the long-range context-dependencies in the acoustic trajectory. Experiments show that DBLSTM-RNNs outperform DNNs where Mean Opinion Scores are 3.2 and 2.3 respectively. Also, DBLSTM-RNNs without dynamic features have better performance than DNNs with dynamic features.",
"title": ""
},
{
"docid": "547423c409d466bcb537a7b0ae0e1758",
"text": "Sequential Bayesian estimation fornonlinear dynamic state-space models involves recursive estimation of filtering and predictive distributions of unobserved time varying signals based on noisy observations. This paper introduces a new filter called the Gaussian particle filter1. It is based on the particle filtering concept, and it approximates the posterior distributions by single Gaussians, similar to Gaussian filters like the extended Kalman filter and its variants. It is shown that under the Gaussianity assumption, the Gaussian particle filter is asymptotically optimal in the number of particles and, hence, has much-improved performance and versatility over other Gaussian filters, especially when nontrivial nonlinearities are present. Simulation results are presented to demonstrate the versatility and improved performance of the Gaussian particle filter over conventional Gaussian filters and the lower complexity than known particle filters. The use of the Gaussian particle filter as a building block of more complex filters is addressed in a companion paper.",
"title": ""
},
{
"docid": "3d007291b5ca2220c15e6eee72b94a76",
"text": "While the number of knowledge bases in the Semantic Web increases, the maintenance and creation of ontology schemata still remain a challenge. In particular creating class expressions constitutes one of the more demanding aspects of ontology engineering. In this article we describe how to adapt a semi-automatic method for learning OWL class expressions to the ontology engineering use case. Specifically, we describe how to extend an existing learning algorithm for the class learning problem. We perform rigorous performance optimization of the underlying algorithms for providing instant suggestions to the user. We also present two plugins, which use the algorithm, for the popular Protégé and OntoWiki ontology editors and provide a preliminary evaluation on real ontologies.",
"title": ""
},
{
"docid": "a53f798d24bb8bd7dc49d96439eefd28",
"text": "In recent times, the worldwide price of fuel is showing an upward surge. One of the major factors leading to this can be attributed to the exponential increase in demand. In a country like Canada, where a majority of the people own vehicles, and more being added to the roads, this demand for fuel is surely going to increase in the future and will also be severely damaging to the environment as transportation sector alone is responsible for a larger share of pollutants emitted into the atmosphere. Electric vehicles offer one way to reduce the level of emissions. Electric motor drives are an integral component of an electric vehicle and consist of one or more electric motors. In this paper an effort has been made to compare different characteristics of motor drives used in electric vehicles and also given is a comprehensive list of references papers published in the field of electric vehicles",
"title": ""
},
{
"docid": "c03e116de528bf16ecbec7f9bf65e87b",
"text": "Kelley's attribution theory is investigated. Subjects filled out a questionnaire that reported 16 different responses ostensibly made by other people. These responses represented four verb categories—emotions, accomplishments, opinions, and actions—and, for experimental subjects, each was accompanied by high or low consensus information, high or low distinctiveness information, and high or low consistency information. Control subjects were not given any information regarding the response. All subjects were asked to attribute each response to characteristics of the person (i.e., the actor), the stimulus, the circumstances, or to some combination of these three factors. In addition, the subjects' expectancies for future response and stimulus generalization on the part of the actor were measured. The three information variables and verb category each had a significant effect on causal attribution and on expectancy for behavioral generalization.",
"title": ""
},
{
"docid": "f672df401b24571f81648066b3181890",
"text": "We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations. A sufficiently powerful temporal model should separate predictable elements of the sequence from unpredictable elements, express uncertainty about those unpredictable elements, and rapidly identify novel elements that may help to predict the future. To create such models, we introduce Generative Temporal Models augmented with external memory systems. They are developed within the variational inference framework, which provides both a practical training methodology and methods to gain insight into the models’ operation. We show, on a range of problems with sparse, long-term temporal dependencies, that these models store information from early in a sequence, and reuse this stored information efficiently. This allows them to perform substantially better than existing models based on well-known recurrent neural networks, like LSTMs.",
"title": ""
},
{
"docid": "243c14b8ea40b697449200627a09a897",
"text": "Nowadays there is a lot of effort on the study, analysis and finding of new solutions related to high density sensor networks used as part of the IoT (Internet of Things) concept. LoRa (Long Range) is a modulation technique that enables the long-range transfer of information with a low transfer rate. This paper presents a review of the challenges and the obstacles of IoT concept with emphasis on the LoRa technology. A LoRaWAN network (Long Range Network Protocol) is of the Low Power Wide Area Network (LPWAN) type and encompasses battery powered devices that ensure bidirectional communication. The main contribution of the paper is the evaluation of the LoRa technology considering the requirements of IoT. In conclusion LoRa can be considered a suitable candidate in addressing the IoT challenges.",
"title": ""
},
{
"docid": "e50ba614fc997f058f8d495b59c18af5",
"text": "We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of intermediate nodes; and joins the resulting semantic relations across the edit sequence. A computational implementation of the model achieves 70% accuracy and 89% precision on the FraCaS test suite. Moreover, including this model as a component in an existing system yields significant performance gains on the Recognizing Textual Entailment challenge.",
"title": ""
},
{
"docid": "23b62c158c71905cbafa2757525d3a84",
"text": "The automotive industry is experiencing a paradigm shift towards autonomous and connected vehicles. Coupled with the increasing usage and complexity of electrical and/or electronic systems, this introduces new safety and security risks. Encouragingly, the automotive industry has relatively well-known and standardised safety risk management practices, but security risk management is still in its infancy. In order to facilitate the derivation of security requirements and security measures for automotive embedded systems, we propose a specifically tailored risk assessment framework, and we demonstrate its viability with an industry use-case. Some of the key features are alignment with existing processes for functional safety, and usability for non-security specialists.\n The framework begins with a threat analysis to identify the assets, and threats to those assets. The following risk assessment process consists of an estimation of the threat level and of the impact level. This step utilises several existing standards and methodologies, with changes where necessary. Finally, a security level is estimated which is used to formulate high-level security requirements.\n The strong alignment with existing standards and processes should make this framework well-suited for the needs in the automotive industry.",
"title": ""
},
{
"docid": "48c9877043b59f3ed69aef3cbd807de7",
"text": "This paper presents an ontology-based approach for data quality inference on streaming observation data originating from large-scale sensor networks. We evaluate this approach in the context of an existing river basin monitoring program called the Intelligent River®. Our current methods for data quality evaluation are compared with the ontology-based inference methods described in this paper. We present an architecture that incorporates semantic inference into a publish/subscribe messaging middleware, allowing data quality inference to occur on real-time data streams. Our preliminary benchmark results indicate delays of 100ms for basic data quality checks based on an existing semantic web software framework. We demonstrate how these results can be maintained under increasing sensor data traffic rates by allowing inference software agents to work in parallel. These results indicate that data quality inference using the semantic sensor network paradigm is viable solution for data intensive, large-scale sensor networks.",
"title": ""
},
{
"docid": "2ba529e0c53554d7aa856a4766d45426",
"text": "Trauma in childhood is a psychosocial, medical, and public policy problem with serious consequences for its victims and for society. Chronic interpersonal violence in children is common worldwide. Developmental traumatology, the systemic investigation of the psychiatric and psychobiological effects of chronic overwhelming stress on the developing child, provides a framework and principles when empirically examining the neurobiological effects of pediatric trauma. This article focuses on peer-reviewed literature on the neurobiological sequelae of childhood trauma in children and in adults with histories of childhood trauma.",
"title": ""
},
{
"docid": "9259d540f93e06b3772eb05ac73369f2",
"text": "A compact reconfigurable rectifying antenna (rectenna) has been proposed for 5.2- and 5.8-GHz microwave power transmission. The proposed rectenna consists of a frequency reconfigurable microstrip antenna and a frequency reconfigurable rectifying circuit. Here, the use of the odd-symmetry mode has significantly cut down the antenna size by half. By controlling the switches installed in the antenna and the rectifying circuit, the rectenna is able to switch operation between 5.2 and 5.8 GHz. Simulated conversion efficiencies of 70.5% and 69.4% are achievable at the operating frequencies of 5.2 and 5.8 GHz, respectively, when the rectenna is given with an input power of 16.5 dBm. Experiment has been conducted to verify the design idea. Due to fabrication tolerances and parametric deviation of the actual diode, the resonant frequencies of the rectenna are measured to be 4.9 and 5.9 GHz. When supplied with input powers of 16 and 15 dBm, the measured maximum conversion efficiencies of the proposed rectenna are found to be 65.2% and 64.8% at 4.9 and 5.9 GHz, respectively, which are higher than its contemporary counterparts.",
"title": ""
},
{
"docid": "0328dd3393285e315347c311bdd421e6",
"text": "Generative adversarial networks (GANs) [7] are a recent approach to train generative models of data, which have been shown to work particularly well on image data. In the current paper we introduce a new model for texture synthesis based on GAN learning. By extending the input noise distribution space from a single vector to a whole spatial tensor, we create an architecture with properties well suited to the task of texture synthesis, which we call spatial GAN (SGAN). To our knowledge, this is the first successful completely data-driven texture synthesis method based on GANs.",
"title": ""
},
{
"docid": "ab2689cd60a72529d61ff7f03f43a5bd",
"text": "In order to enhance the efficiency of radio frequency identification (RFID) and lower system computational complexity, this paper proposes three novel tag anticollision protocols for passive RFID systems. The three proposed protocols are based on a binary tree slotted ALOHA (BTSA) algorithm. In BTSA, tags are randomly assigned to slots of a frame and if some tags collide in a slot, the collided tags in the slot will be resolved by binary tree splitting while the other tags in the subsequent slots will wait. The three protocols utilize a dynamic, an adaptive, and a splitting method to adjust the frame length to a value close to the number of tags, respectively. For BTSA, the identification efficiency can achieve an optimal value only when the frame length is close to the number of tags. Therefore, the proposed protocols efficiency is close to the optimal value. The advantages of the protocols are that, they do not need the estimation of the number of tags, and their efficiency is not affected by the variance of the number of tags. Computer simulation results show that splitting BTSA's efficiency can achieve 0.425, and the other two protocols efficiencies are about 0.40. Also, the results show that the protocols efficiency curves are nearly horizontal when the number of tags increases from 20 to 4,000.",
"title": ""
},
{
"docid": "ca655b741316e8c65b6b7590833396e1",
"text": "• A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.",
"title": ""
}
] |
scidocsrr
|
a92324172cfd09afa05ef9065dc06edc
|
The Utility of Hello Messages for Determining Link Connectivity
|
[
{
"docid": "ef5f1aa863cc1df76b5dc057f407c473",
"text": "GLS is a new distributed location service which tracks mobile node locations. GLS combined with geographic forwarding allows the construction of ad hoc mobile networks that scale to a larger number of nodes than possible with previous work. GLS is decentralized and runs on the mobile nodes themselves, requiring no fixed infrastructure. Each mobile node periodically updates a small set of other nodes (its location servers) with its current location. A node sends its position updates to its location servers without knowing their actual identities, assisted by a predefined ordering of node identifiers and a predefined geographic hierarchy. Queries for a mobile node's location also use the predefined identifier ordering and spatial hierarchy to find a location server for that node.\nExperiments using the ns simulator for up to 600 mobile nodes show that the storage and bandwidth requirements of GLS grow slowly with the size of the network. Furthermore, GLS tolerates node failures well: each failure has only a limited effect and query performance degrades gracefully as nodes fail and restart. The query performance of GLS is also relatively insensitive to node speeds. Simple geographic forwarding combined with GLS compares favorably with Dynamic Source Routing (DSR): in larger networks (over 200 nodes) our approach delivers more packets, but consumes fewer network resources.",
"title": ""
}
] |
[
{
"docid": "30b1b4df0901ab61ab7e4cfb094589d1",
"text": "Direct modulation at 56 and 50 Gb/s of 1.3-μm InGaAlAs ridge-shaped-buried heterostructure (RS-BH) asymmetric corrugation-pitch-modulation (ACPM) distributed feedback lasers is experimentally demonstrated. The fabricated lasers have a low threshold current (5.6 mA at 85°C), high temperature characteristics (71 K), high slope relaxation frequency (3.2 GHz/mA1/2 at 85°C), and wide bandwidth (22.1 GHz at 85°C). These superior properties enable the lasers to run at 56 Gb/s and 55°C and 50 Gb/s at up to 80°C for backto-back operation with clear eye openings. This is achieved by the combination of a low-leakage RS-BH and an ACPM grating. Moreover, successful transmission of 56and 50-Gb/s modulated signals over a 10-km standard single-mode fiber is achieved. These results confirm the suitability of this type of laser for use as a cost-effective light source in 400 GbE and OTU5 applications.",
"title": ""
},
{
"docid": "701fb71923bb8a2fc90df725074f576b",
"text": "Quantum computing poses challenges to public key signatures as we know them today. LMS and XMSS are two hash based signature schemes that have been proposed in the IETF as quantum secure. Both schemes are based on well-studied hash trees, but their similarities and differences have not yet been discussed. In this work, we attempt to compare the two standards. We compare their security assumptions and quantify their signature and public key sizes. We also address the computation overhead they introduce. Our goal is to provide a clear understanding of the schemes’ similarities and differences for implementers and protocol designers to be able to make a decision as to which standard to chose.",
"title": ""
},
{
"docid": "56b42c551ad57c82ad15e6fc2e98f528",
"text": "Recent work has demonstrated that when artificial agents are limited in their ability to achieve their goals, the agent designer can benefit by making the agent’s goals different from the designer’s. This gives rise to the optimization problem of designing the artificial agent’s goals—in the RL framework, designing the agent’s reward function. Existing attempts at solving this optimal reward problem do not leverage experience gained online during the agent’s lifetime nor do they take advantage of knowledge about the agent’s structure. In this work, we develop a gradient ascent approach with formal convergence guarantees for approximately solving the optimal reward problem online during an agent’s lifetime. We show that our method generalizes a standard policy gradient approach, and we demonstrate its ability to improve reward functions in agents with various forms of limitations. 1 The Optimal Reward Problem In this work, we consider the scenario of an agent designer building an autonomous agent. The designer has his or her own goals which must be translated into goals for the autonomous agent. We represent goals using the Reinforcement Learning (RL) formalism of the reward function. This leads to the optimal reward problem of designing the agent’s reward function so as to maximize the objective reward received by the agent designer. Typically, the designer assigns his or her own reward to the agent. However, there is ample work which demonstrates the benefit of assigning reward which does not match the designer’s. For example, work on reward shaping [11] has shown how to modify rewards to accelerate learning without altering the optimal policy, and PAC-MDP methods [5, 20] including approximate Bayesian methods [7, 19] add bonuses to the objective reward to achieve optimism under uncertainty. These approaches explicitly or implicitly assume that the asymptotic behavior of the agent should be the same as that which would occur using the objective reward function. These methods do not explicitly consider the optimal reward problem; however, they do show improved performance through reward modification. In our recent work that does explicitly consider the optimal reward problem [18], we analyzed an explicit hypothesis about the benefit of reward design—that it helps mitigate the performance loss caused by computational constraints (bounds) on agent architectures. We considered various types of agent limitations—limits on planning depth, failure to account for partial observability, and other erroneous modeling assumptions—and demonstrated the benefits of good reward functions in each case empirically. Crucially, in bounded agents, the optimal reward function often leads to behavior that is different from the asymptotic behavior achieved with the objective reward function. In this work, we develop an algorithm, Policy Gradient for Reward Design (PGRD), for improving reward functions for a family of bounded agents that behave according to repeated local (from the current state) model-based planning. We show that this algorithm is capable of improving the reward functions in agents with computational limitations necessitating small bounds on the depth of planning, and also from the use of an inaccurate model (which may be inaccurate due to computationally-motivated approximations). PGRD has few parameters, improves the reward",
"title": ""
},
{
"docid": "09132f8695e6f8d32d95a37a2bac46ee",
"text": "Social media has become one of the main channels for people to access and consume news, due to the rapidness and low cost of news dissemination on it. However, such properties of social media also make it a hotbed of fake news dissemination, bringing negative impacts on both individuals and society. Therefore, detecting fake news has become a crucial problem attracting tremendous research effort. Most existing methods of fake news detection are supervised, which require an extensive amount of time and labor to build a reliably annotated dataset. In search of an alternative, in this paper, we investigate if we could detect fake news in an unsupervised manner. We treat truths of news and users’ credibility as latent random variables, and exploit users’ engagements on social media to identify their opinions towards the authenticity of news. We leverage a Bayesian network model to capture the conditional dependencies among the truths of news, the users’ opinions, and the users’ credibility. To solve the inference problem, we propose an efficient collapsed Gibbs sampling approach to infer the truths of news and the users’ credibility without any labelled data. Experiment results on two datasets show that the proposed method significantly outperforms the compared unsupervised methods.",
"title": ""
},
{
"docid": "e729d7b399b3a4d524297ae79b28f45d",
"text": "The aim of this paper is to solve optimal design problems for industrial applications when the objective function value requires the evaluation of expensive simulation codes and its first derivatives are not available. In order to achieve this goal we propose two new algorithms that draw inspiration from two existing approaches: a filled function based algorithm and a Particle Swarm Optimization method. In order to test the efficiency of the two proposed algorithms, we perform a numerical comparison both with the methods we drew inspiration from, and with some standard Global Optimization algorithms that are currently adopted in industrial design optimization. Finally, a realistic ship design problem, namely the reduction of the amplitude of the heave motion of a ship advancing in head seas (a problem connected to both safety and comfort), is solved using the new codes and other global and local derivativeThis work has been partially supported by the Ministero delle Infrastrutture e dei Trasporti in the framework of the research plan “Programma di Ricerca sulla Sicurezza”, Decreto 17/04/2003 G.U. n. 123 del 29/05/2003, by MIUR, FIRB 2001 Research Program Large-Scale Nonlinear Optimization and by the U.S. Office of Naval Research (NICOP grant N. 000140510617). E.F. Campana ( ) · D. Peri · A. Pinto INSEAN—Istituto Nazionale per Studi ed Esperienze di Architettura Navale, Via di Vallerano 139, 00128 Roma, Italy e-mail: [email protected] G. Liuzzi Consiglio Nazionale delle Ricerche, Istituto di Analisi dei Sistemi ed Informatica “A. Ruberti”, Viale Manzoni 30, 00185 Roma, Italy S. Lucidi Dipartimento di Informatica e Sistemistica “A. Ruberti”, Università degli Studi di Roma “Sapienza”, Via Ariosto 25, 00185 Roma, Italy V. Piccialli Dipartimento di Ingegneria dell’Impresa, Università degli Studi di Roma “Tor Vergata”, Via del Policlinico 1, 00133 Roma, Italy 534 E.F. Campana et al. free optimization methods. All the numerical results show the effectiveness of the two new algorithms.",
"title": ""
},
{
"docid": "e95649b06c70682ba4229cff11fefeaf",
"text": "In this paper, we present Black SDN, a Software Defined Networking (SDN) architecture for secure Internet of Things (IoT) networking and communications. SDN architectures were developed to provide improved routing and networking performance for broadband networks by separating the control plain from the data plain. This basic SDN concept is amenable to IoT networks, however, the common SDN implementations designed for wired networks are not directly amenable to the distributed, ad hoc, low-power, mesh networks commonly found in IoT systems. SDN promises to improve the overall lifespan and performance of IoT networks. However, the SDN architecture changes the IoT network's communication patterns, allowing new types of attacks, and necessitating a new approach to securing the IoT network. Black SDN is a novel SDN-based secure networking architecture that secures both the meta-data and the payload within each layer of an IoT communication packet while utilizing the SDN centralized controller as a trusted third party for secure routing and optimized system performance management. We demonstrate through simulation the feasibility of Black SDN in networks where nodes are asleep most of their lives, and specifically examine a Black SDN IoT network based upon the IEEE 802.15.4 LR WPAN (Low Rate - Wireless Personal Area Network) protocol.",
"title": ""
},
{
"docid": "01d74a3a50d1121646ddab3ea46b5681",
"text": "Sleep quality is important, especially given the considerable number of sleep-related pathologies. The distribution of sleep stages is a highly effective and objective way of quantifying sleep quality. As a standard multi-channel recording used in the study of sleep, polysomnography (PSG) is a widely used diagnostic scheme in sleep medicine. However, the standard process of sleep clinical test, including PSG recording and manual scoring, is complex, uncomfortable, and time-consuming. This process is difficult to implement when taking the whole PSG measurements at home for general healthcare purposes. This work presents a novel sleep stage classification system, based on features from the two forehead EEG channels FP1 and FP2. By recording EEG from forehead, where there is no hair, the proposed system can monitor physiological changes during sleep in a more practical way than previous systems. Through a headband or self-adhesive technology, the necessary sensors can be applied easily by users at home. Analysis results demonstrate that classification performance of the proposed system overcomes the individual differences between different participants in terms of automatically classifying sleep stages. Additionally, the proposed sleep stage classification system can identify kernel sleep features extracted from forehead EEG, which are closely related with sleep clinician's expert knowledge. Moreover, forehead EEG features are classified into five sleep stages by using the relevance vector machine. In a leave-one-subject-out cross validation analysis, we found our system to correctly classify five sleep stages at an average accuracy of 76.7 ± 4.0 (SD) % [average kappa 0.68 ± 0.06 (SD)]. Importantly, the proposed sleep stage classification system using forehead EEG features is a viable alternative for measuring EEG signals at home easily and conveniently to evaluate sleep quality reliably, ultimately improving public healthcare.",
"title": ""
},
{
"docid": "6b1dc94c4c70e1c78ea32a760b634387",
"text": "3d reconstruction from a single image is inherently an ambiguous problem. Yet when we look at a picture, we can often infer 3d information about the scene. Humans perform single-image 3d reconstructions by using a variety of singleimage depth cues, for example, by recognizing objects and surfaces, and reasoning about how these surfaces are connected to each other. In this paper, we focus on the problem of automatic 3d reconstruction of indoor scenes, specifically ones (sometimes called “Manhattan worlds”) that consist mainly of orthogonal planes. We use a Markov random field (MRF) model to identify the different planes and edges in the scene, as well as their orientations. Then, an iterative optimization algorithm is applied to infer the most probable position of all the planes, and thereby obtain a 3d reconstruction. Our approach is fully automatic—given an input image, no human intervention is necessary to obtain an approximate 3d reconstruction.",
"title": ""
},
{
"docid": "a341bcf8efb975c078cc452e0eecc183",
"text": "We show that, during inference with Convolutional Neural Networks (CNNs), more than 2× to 8× ineffectual work can be exposed if instead of targeting those weights and activations that are zero, we target different combinations of value stream properties. We demonstrate a practical application with Bit-Tactical (TCL), a hardware accelerator which exploits weight sparsity, per layer precision variability and dynamic fine-grain precision reduction for activations, and optionally the naturally occurring sparse effectual bit content of activations to improve performance and energy efficiency. TCL benefits both sparse and dense CNNs, natively supports both convolutional and fully-connected layers, and exploits properties of all activations to reduce storage, communication, and computation demands. While TCL does not require changes to the CNN to deliver benefits, it does reward any technique that would amplify any of the aforementioned weight and activation value properties. Compared to an equivalent data-parallel accelerator for dense CNNs, TCLp, a variant of TCL improves performance by 5.05× and is 2.98× more energy efficient while requiring 22% more area.",
"title": ""
},
{
"docid": "5700ba2411f9b4e4ed59c8c5839dc87d",
"text": "Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g., MSR-RF: C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.",
"title": ""
},
{
"docid": "081c350100f4db11818c75507f715cda",
"text": "Building detection and footprint extraction are highly demanded for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints still remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework made it possible to perform dense pixel-wise classification of input images. Based on these abilities we propose a methodology, which automatically generates a full resolution binary building mask out of a Digital Surface Model (DSM) using a Fully Convolution Network (FCN) architecture. The advantage of using the depth information is that it provides geometrical silhouettes and allows a better separation of buildings from background as well as through its invariance to illumination and color variations. The proposed framework has mainly two steps. Firstly, the FCN is trained on a large set of patches consisting of normalized DSM (nDSM) as inputs and available ground truth building mask as target outputs. Secondly, the generated predictions from FCN are viewed as unary terms for a Fully connected Conditional Random Fields (FCRF), which enables us to create a final binary building mask. A series of experiments demonstrate that our methodology is able to extract accurate building footprints which are close to the buildings original shapes to a high degree. The quantitative and qualitative analysis show the significant improvements of the results in contrast to the multy-layer fully connected network from our previous work.",
"title": ""
},
{
"docid": "051c530bf9d49bf1066ddf856488dff1",
"text": "This review paper focusses on DESMO-J, a comprehensive and stable Java-based open-source simulation library. DESMO-J is recommended in numerous academic publications for implementing discrete event simulation models for various applications. The library was integrated into several commercial software products. DESMO-J’s functional range and usability is continuously improved by the Department of Informatics of the University of Hamburg (Germany). The paper summarizes DESMO-J’s core functionality and important design decisions. It also compares DESMO-J to other discrete event simulation frameworks. Furthermore, latest developments and new opportunities are addressed in more detail. These include a) improvements relating to the quality and applicability of the software itself, e.g. a port to .NET, b) optional extension packages like visualization libraries and c) new components facilitating a more powerful and flexible simulation logic, like adaption to real time or a compact representation of production chains and similar queuing systems. Finally, the paper exemplarily describes how to apply DESMO-J to harbor logistics and business process modeling, thus providing insights into DESMO-J practice.",
"title": ""
},
{
"docid": "dce75562a7e8b02364d39fd7eb407748",
"text": "The ability to predict future user activity is invaluable when it comes to content recommendation and personalization. For instance, knowing when users will return to an online music service and what they will listen to increases user satisfaction and therefore user retention.\n We present a model based on Long-Short Term Memory to estimate when a user will return to a site and what their future listening behavior will be. In doing so, we aim to solve the problem of Just-In-Time recommendation, that is, to recommend the right items at the right time. We use tools from survival analysis for return time prediction and exponential families for future activity analysis. We show that the resulting multitask problem can be solved accurately, when applied to two real-world datasets.",
"title": ""
},
{
"docid": "9dde89f24f55602e21823620b49633dd",
"text": "Darier's disease is a rare late-onset genetic disorder of keratinisation. Mosaic forms of the disease characterised by localised and unilateral keratotic papules carrying post-zygotic ATP2A2 mutation in affected areas have been documented. Segmental forms of Darier's disease are classified into two clinical subtypes: type 1 manifesting with distinct lesions on a background of normal appearing skin and type 2 with well-defined areas of Darier's disease occurring on a background of less severe non-mosaic phenotype. Herein we describe two cases of type 1 segmental Darier's disease with favourable response to topical retinoids.",
"title": ""
},
{
"docid": "c0c064fdc011973848568f5b087ba20b",
"text": "’InfoVis novices’ have been found to struggle with visual data exploration. A ’conversational interface’ which would take natural language inputs to visualization generation and modification, while maintaining a history of the requests, visualizations and findings of the user, has the potential to ameliorate many of these challenges. We present Articulate2, initial work toward a conversational interface to visual data exploration.",
"title": ""
},
{
"docid": "0b024671e04090051292b5e76a4690ae",
"text": "The brain has evolved in this multisensory context to perceive the world in an integrated fashion. Although there are good reasons to be skeptical of the influence of cognition on perception, here we argue that the study of sensory substitution devices might reveal that perception and cognition are not necessarily distinct, but rather continuous aspects of our information processing capacities.",
"title": ""
},
{
"docid": "25828231caaf3288ed4fdb27df7f8740",
"text": "This paper reports on an algorithm to support autonomous vehicles in reasoning about occluded regions of their environment to make safe, reliable decisions. In autonomous driving scenarios, other traffic participants are often occluded from sensor measurements by buildings or large vehicles like buses or trucks, which makes tracking dynamic objects challenging.We present a method to augment standard dynamic object trackers with means to 1) estimate the occluded state of other traffic agents and 2) robustly associate the occluded estimates with new observations after the tracked object reenters the visible region of the sensor horizon. We perform occluded state estimation using a dynamics model that accounts for the driving behavior of traffic agents and a hybrid Gaussian mixture model (hGMM) to capture multiple hypotheses over discrete behavior, such as driving along different lanes or turning left or right at an intersection. Upon new observations, we associate them to existing estimates in terms of the Kullback-Leibler divergence (KLD). We evaluate the proposed method in simulation and using a real-world traffic-tracking dataset from an autonomous vehicle platform. Results show that our method can handle significantly prolonged occlusions when compared to a standard dynamic object tracking system.",
"title": ""
},
{
"docid": "2318fbd8ca703c0ff5254606b8dce442",
"text": "Historically, the inspection and maintenance of high-voltage power lines have been performed by linemen using various traditional means. In recent years, the use of robots appeared as a new and complementary method of performing such tasks, as several initiatives have been explored around the world. Among them is the teleoperated robotic platform called LineScout Technology, developed by Hydro-Québec, which has the capacity to clear most obstacles found on the grid. Since its 2006 introduction in the operations, it is considered by many utilities as the pioneer project in the domain. This paper’s purpose is to present the mobile platform design and its main mechatronics subsystems to support a comprehensive description of the main functions and application modules it offers. This includes sensors and a compact modular arm equipped with tools to repair cables and broken conductor strands. This system has now been used on many occasions to assess the condition of power line infrastructure and some results are presented. Finally, future developments and potential technologies roadmap are briefly discussed.",
"title": ""
}
] |
scidocsrr
|
f2dfe17a41550f3ee1fca7d51438e76c
|
Open source real-time control software for the Kuka light weight robot
|
[
{
"docid": "81b03da5e09cb1ac733c966b33d0acb1",
"text": "Abstrud In the last two years a third generation of torque-controlled light weight robots has been developed in DLR‘s robotics and mechatronics lab which is based on all the experiences that have been made with the first two generations. It aims at reaching the limits of what seems achievable with present day technologies not only with respect to light-weight, but also with respect to minimal power consumption and losses. One of the main gaps we tried to close in version III was the development of a new, robot-dedicated high energy motor designed with the best available techniques of concurrent engineering, and the renewed efforts to save weight in the links by using ultralight carbon fibres.",
"title": ""
}
] |
[
{
"docid": "2b9b9c1a012cd9f549acef26cd8b0156",
"text": "VAES stands for variable AES. VAES3 is the third generation format-preserving encryption algorithm that was developed in a report [4] simultaneously with the comprehensive paper on FPE [1] and subsequently updated slightly to be in concert with the FFX standard proposal. The standard proposal of FFX includes, in an appendix, example instantiations called A2 and A10. A follow on addendum [3] includes an instantiation called FFX[radix] . The stated intent of FFX is that it is a framework under which many implementations are compliant. The VAES3 scheme is compliant to those requirements. VAES3 was designed to meet security goals and requirements beyond the original example instantiations, and its design goals are slightly different than those of FFX[radix]. One of the unique features of VAES3 is a subkey step that enhances security and lengthens the lifetime of the key.",
"title": ""
},
{
"docid": "7b4dd695182f7e15e58f44e309bf897c",
"text": "Phosphorus is one of the most abundant elements preserved in earth, and it comprises a fraction of ∼0.1% of the earth crust. In general, phosphorus has several allotropes, and the two most commonly seen allotropes, i.e. white and red phosphorus, are widely used in explosives and safety matches. In addition, black phosphorus, though rarely mentioned, is a layered semiconductor and has great potential in optical and electronic applications. Remarkably, this layered material can be reduced to one single atomic layer in the vertical direction owing to the van der Waals structure, and is known as phosphorene, in which the physical properties can be tremendously different from its bulk counterpart. In this review article, we trace back to the research history on black phosphorus of over 100 years from the synthesis to material properties, and extend the topic from black phosphorus to phosphorene. The physical and transport properties are highlighted for further applications in electronic and optoelectronics devices.",
"title": ""
},
{
"docid": "4a22a7dbcd1515e2b1b6e7748ffa3e02",
"text": "Average public feedback scores given to sellers have increased strongly over time in an online labor market. Changes in marketplace composition or improved seller performance cannot fully explain this trend. We propose that two factors inflated reputations: (1) it costs more to give bad feedback than good feedback and (2) this cost to raters is increasing in the cost to sellers from bad feedback. Together, (1) and (2) can lead to an equilibrium where feedback is always positive, regardless of performance. In response, the marketplace encouraged buyers to additionally give private feedback. This private feedback was substantially more candid and more predictive of future worker performance. When aggregates of private feedback about each job applicant were experimentally provided to employers as a private feedback score, employers used these scores when making screening and hiring decisions.",
"title": ""
},
{
"docid": "52a1f1de8db1a9aca14cb4df2395868b",
"text": "We propose a new approach to localizing handle-like grasp affordances in 3-D point clouds. The main idea is to identify a set of sufficient geometric conditions for the existence of a grasp affordance and to search the point cloud for neighborhoods that satisfy these conditions. Our goal is not to find all possible grasp affordances, but instead to develop a method of localizing important types of grasp affordances quickly and reliably. The strength of this method relative to other current approaches is that it is very practical: it can have good precision/recall for the types of affordances under consideration, it runs in real-time, and it is easy to adapt to different robots and operating scenarios. We validate with a set of experiments where the approach is used to enable the Rethink Baxter robot to localize and grasp unmodelled objects.",
"title": ""
},
{
"docid": "aa88b71c68ed757faf9eb896a81003f5",
"text": "Purpose The present study evaluated the platelet distribution pattern and growth factor release (VEGF, TGF-β1 and EGF) within three PRF (platelet-rich-fibrin) matrices (PRF, A-PRF and A-PRF+) that were prepared using different relative centrifugation forces (RCF) and centrifugation times. Materials and methods immunohistochemistry was conducted to assess the platelet distribution pattern within three PRF matrices. The growth factor release was measured over 10 days using ELISA. Results The VEGF protein content showed the highest release on day 7; A-PRF+ showed a significantly higher rate than A-PRF and PRF. The accumulated release on day 10 was significantly higher in A-PRF+ compared with A-PRF and PRF. TGF-β1 release in A-PRF and A-PRF+ showed significantly higher values on days 7 and 10 compared with PRF. EGF release revealed a maximum at 24 h in all groups. Toward the end of the study, A-PRF+ demonstrated significantly higher EGF release than PRF. The accumulated growth factor releases of TGF-β1 and EGF on day 10 were significantly higher in A-PRF+ and A-PRF than in PRF. Moreover, platelets were located homogenously throughout the matrix in the A-PRF and A-PRF+ groups, whereas platelets in PRF were primarily observed within the lower portion. Discussion the present results show an increase growthfactor release by decreased RCF. However, further studies must be conducted to examine the extent to which enhancing the amount and the rate of released growth factors influence wound healing and biomaterial-based tissue regeneration. Conclusion These outcomes accentuate the fact that with a reduction of RCF according to the previously LSCC (described low speed centrifugation concept), growth factor release can be increased in leukocytes and platelets within the solid PRF matrices.",
"title": ""
},
{
"docid": "427bdf9dc6462c2569956745eaee6a1b",
"text": "Because there is increasing concern about low-back disability and its current medical management, this analysis attempts to construct a new theoretic framework for treatment. Observations of natural history and epidemiology suggest that low-back pain should be a benign, self-limiting condition, that low back-disability as opposed to pain is a relatively recent Western epidemic, and that the role of medicine in that epidemic must be critically examined. The traditional medical model of disease is contrasted with a biopsychosocial model of illness to analyze success and failure in low-back disorders. Studies of the mathematical relationship between the elements of illness in chronic low-back pain suggest that the biopsychosocial concept can be used as an operational model that explains many clinical observations. This model is used to compare rest and active rehabilitation for low-back pain. Rest is the commonest treatment prescribed after analgesics but is based on a doubtful rationale, and there is little evidence of any lasting benefit. There is, however, little doubt about the harmful effects--especially of prolonged bed rest. Conversely, there is no evidence that activity is harmful and, contrary to common belief, it does not necessarily make the pain worse. Experimental studies clearly show that controlled exercises not only restore function, reduce distress and illness behavior, and promote return to work, but actually reduce pain. Clinical studies confirm the value of active rehabilitation in practice. To achieve the goal of treating patients rather than spines, we must approach low-back disability as an illness rather than low-back pain as a purely physical disease. We must distinguish pain as a purely the symptoms and signs of distress and illness behavior from those of physical disease, and nominal from substantive diagnoses. Management must change from a negative philosophy of rest for pain to more active restoration of function. Only a new model and understanding of illness by physicians and patients alike makes real change possible.",
"title": ""
},
{
"docid": "989cdc80521e1c8761f733ad3ed49d79",
"text": "The wide availability of sensing devices in the medical domain causes the creation of large and very large data sets. Hence, tasks as the classification in such data sets becomes more and more difficult. Deep Neural Networks (DNNs) are very effective in classification, yet finding the best values for their hyper-parameters is a difficult and time-consuming task. This paper introduces an approach to decrease execution times to automatically find good hyper-parameter values for DNN through Evolutionary Algorithms when classification task is faced. This decrease is obtained through the combination of two mechanisms. The former is constituted by a distributed version for a Differential Evolution algorithm. The latter is based on a procedure aimed at reducing the size of the training set and relying on a decomposition into cubes of the space of the data set attributes. Experiments are carried out on a medical data set about Obstructive Sleep Anpnea. They show that sub-optimal DNN hyper-parameter values are obtained in a much lower time with respect to the case where this reduction is not effected, and that this does not come to the detriment of the accuracy in the classification over the test set items.",
"title": ""
},
{
"docid": "e2a605f5c22592bd5ca828d4893984be",
"text": "Deep neural networks are complex and opaque. As they enter application in a variety of important and safety critical domains, users seek methods to explain their output predictions. We develop an approach to explaining deep neural networks by constructing causal models on salient concepts contained in a CNN. We develop methods to extract salient concepts throughout a target network by using autoencoders trained to extract humanunderstandable representations of network activations. We then build a bayesian causal model using these extracted concepts as variables in order to explain image classification. Finally, we use this causal model to identify and visualize features with significant causal influence on final classification.",
"title": ""
},
{
"docid": "22658b675b501059ec5a7905f6b766ef",
"text": "The purpose of this study was to compare the physiological results of 2 incremental graded exercise tests (GXTs) and correlate these results with a short-distance laboratory cycle time trial (TT). Eleven men (age 25 +/- 5 years, Vo(2)max 62 +/- 8 ml.kg(-1).min(-1)) randomly underwent 3 laboratory tests performed on a cycle ergometer. The first 2 tests consisted of a GXT consisting of either 3-minute (GXT(3-min)) or 5-minute (GXT(5-min)) workload increments. The third test involved 1 laboratory 30-minute TT. The peak power output, lactate threshold, onset of blood lactate accumulation, and maximum displacement threshold (Dmax) determined from each GXT was not significantly different and in agreement when measured from the GXT(3-min) or GXT(5-min). Furthermore, similar correlation coefficients were found among the results of each GXT and average power output in the 30-minute cycling TT. Hence, the results of either GXT can be used to predict performance or for training prescription.",
"title": ""
},
{
"docid": "343a2035ca2136bc38451c0e92aeb7fc",
"text": "Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire type neuron models where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can only depend on spike timing and, potentially, on membrane potential, as well as on the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of the above variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.",
"title": ""
},
{
"docid": "c90eae76dbde16de8d52170c2715bd7a",
"text": "Several literatures converge on the idea that approach and avoidance/withdrawal behaviors are managed by two partially distinct self-regulatory system. The functions of these systems also appear to be embodied in discrepancyreducing and -enlarging feedback loops, respectively. This article describes how the feedback construct has been used to address these two classes of action and the affective experiences that relate to them. Further discussion centers on the development of measures of individual differences in approach and avoidance tendencies, and how these measures can be (and have been) used as research tools, to investigate whether other phenomena have their roots in approach or avoidance.",
"title": ""
},
{
"docid": "fd87b56e57b6750aa0e018724f5ba975",
"text": "An effective design of effective and efficient self-adaptive systems may rely on several existing approaches. Software models and model checking techniques at run time represent one of them since they support automatic reasoning about such changes, detect harmful configurations, and potentially enable appropriate (self-)reactions. However, traditional model checking techniques and tools may not be applied as they are at run time, since they hardly meet the constraints imposed by on-the-fly analysis, in terms of execution time and memory occupation. For this reason, efficient run-time model checking represents a crucial research challenge. This paper precisely addresses this issue and focuses on probabilistic run-time model checking in which reliability models are given in terms of Discrete Time Markov Chains which are verified at run-time against a set of requirements expressed as logical formulae. In particular, the paper discusses the use of probabilistic model checking at run-time for selfadaptive systems by surveying and comparing the existing approaches divided in two categories: state-elimination algorithms and algebra-based algorithms. The discussion is supported by a realistic example and by empirical experiments.",
"title": ""
},
{
"docid": "e4e0e01b3af99dfd88ff03a1057b40d3",
"text": "There is a tension between user and author control of narratives in multimedia systems and virtual environments. Reducing the interactivity gives the author more control over when and how users experience key events in a narrative, but may lead to less immersion and engagement. Allowing the user to freely explore the virtual space introduces the risk that important narrative events will never be experienced. One approach to striking a balance between user freedom and author control is adaptation of narrative event presentation (i.e. changing the time, location, or method of presentation of a particular event in order to better communicate with the user). In this paper, we describe the architecture of a system capable of dynamically supporting narrative event adaptation. We also report results from two studies comparing adapted narrative presentation with two other forms of unadapted presentation - events with author selected views (movie), and events with user selected views (traditional VE). An analysis of user performance and feedback offers support for the hypothesis that adaptation can improve comprehension of narrative events in virtual environments while maintaining a sense of user control.",
"title": ""
},
{
"docid": "047480185afbea439eee2ee803b9d1f9",
"text": "The ability to perceive and analyze terrain is a key problem in mobile robot navigation. Terrain perception problems arise in planetary robotics, agriculture, mining, and, of course, self-driving cars. Here, we introduce the PTA (probabilistic terrain analysis) algorithm for terrain classification with a fastmoving robot platform. The PTA algorithm uses probabilistic techniques to integrate range measurements over time, and relies on efficient statistical tests for distinguishing drivable from nondrivable terrain. By using probabilistic techniques, PTA is able to accommodate severe errors in sensing, and identify obstacles with nearly 100% accuracy at speeds of up to 35mph. The PTA algorithm was an essential component in the DARPA Grand Challenge, where it enabled our robot Stanley to traverse the entire course in record time.",
"title": ""
},
{
"docid": "1573dcbb7b858ab6802018484f00ef91",
"text": "There is a multitude of tools available for Business Model Innovation (BMI). However, Business models (BM) and supporting tools are not yet widely known by micro, small and medium sized companies (SMEs). In this paper, we build on analysis of 61 cases to present typical BMI paths of European SMEs. Firstly, we constructed two paths for established companies that we named as 'I want to grow' and 'I want to make my business profitable'. We also found one path for start-ups: 'I want to start a new business'. Secondly, we suggest appropriate BM toolsets for the three paths. The identified paths and related tools contribute to BMI research and practise with an aim to boost BMI in SMEs.",
"title": ""
},
{
"docid": "3cb6829b876787018856abfaf63f05ad",
"text": "BACKGROUND\nRhinoplasty remains one of the most challenging operations, as exemplified in the Middle Eastern patient. The ill-defined, droopy tip, wide and high dorsum, and thick skin envelope mandate meticulous attention to preoperative evaluation and efficacious yet safe surgical maneuvers. The authors provide a systematic approach to evaluation and improvement of surgical outcomes in this patient population.\n\n\nMETHODS\nA retrospective, 3-year review identified patients of Middle Eastern heritage who underwent primary rhinoplasty and those who did not but had nasal photographs. Photographs and operative records (when applicable) were reviewed. Specific nasal characteristics, component-directed surgical techniques, and aesthetic outcomes were delineated.\n\n\nRESULTS\nThe Middle Eastern nose has a combination of specific nasal traits, with some variability, including thick/sebaceous skin (excess fibrofatty tissue), high/wide dorsum with cartilaginous and bony humps, ill-defined nasal tip, weak/thin lateral crura relative to the skin envelope, nostril-tip imbalance, acute nasolabial and columellar-labial angles, and a droopy/hyperdynamic nasal tip. An aggressive yet nondestructive surgical approach to address the nasal imbalance often requires soft-tissue debulking, significant cartilaginous framework modification (with augmentation/strengthening), tip refinement/rotation/projection, low osteotomies, and depressor septi nasi muscle treatment. The most common postoperative defects were related to soft-tissue scarring, thickened skin envelope, dorsum irregularities, and prolonged edema in the supratip/tip region.\n\n\nCONCLUSIONS\nIt is critical to improve the strength of the cartilaginous framework with respect to the thick, noncontractile skin/soft-tissue envelope, particularly when moderate to large dorsal reduction is required. A multitude of surgical maneuvers are often necessary to address all the salient characteristics of the Middle Eastern nose and to produce the desired aesthetic result.",
"title": ""
},
{
"docid": "39180c1e2636a12a9d46d94fe3ebfa65",
"text": "We present a novel machine learning based algorithm extending the interaction space around mobile devices. The technique uses only the RGB camera now commonplace on off-the-shelf mobile devices. Our algorithm robustly recognizes a wide range of in-air gestures, supporting user variation, and varying lighting conditions. We demonstrate that our algorithm runs in real-time on unmodified mobile devices, including resource-constrained smartphones and smartwatches. Our goal is not to replace the touchscreen as primary input device, but rather to augment and enrich the existing interaction vocabulary using gestures. While touch input works well for many scenarios, we demonstrate numerous interaction tasks such as mode switches, application and task management, menu selection and certain types of navigation, where such input can be either complemented or better served by in-air gestures. This removes screen real-estate issues on small touchscreens, and allows input to be expanded to the 3D space around the device. We present results for recognition accuracy (93% test and 98% train), impact of memory footprint and other model parameters. Finally, we report results from preliminary user evaluations, discuss advantages and limitations and conclude with directions for future work.",
"title": ""
},
{
"docid": "3b5555c5624fc11bbd24cfb8fff669f0",
"text": "Redundancy resolution is a critical problem in the control of robotic manipulators. Recurrent neural networks (RNNs), as inherently parallel processing models for time-sequence processing, are potentially applicable for the motion control of manipulators. However, the development of neural models for high-accuracy and real-time control is a challenging problem. This paper identifies two limitations of the existing RNN solutions for manipulator control, i.e., position error accumulation and the convex restriction on the projection set, and overcomes them by proposing two modified neural network models. Our method allows nonconvex sets for projection operations, and control error does not accumulate over time in the presence of noise. Unlike most works in which RNNs are used to process time sequences, the proposed approach is model-based and training-free, which makes it possible to achieve fast tracking of reference signals with superior robustness and accuracy. Theoretical analysis reveals the global stability of a system under the control of the proposed neural networks. Simulation results confirm the effectiveness of the proposed control method in both the position regulation and tracking control of redundant PUMA 560 manipulators.",
"title": ""
},
{
"docid": "a7bf370e83bd37ed4f83c3846cfaaf97",
"text": "This paper presents the design and implementation of an evanescent tunable combline filter based on electronic tuning with the use of RF-MEMS capacitor banks. The use of MEMS tuning circuit results in the compact implementation of the proposed filter with high-Q and near to zero DC power consumption. The proposed filter consist of combline resonators with tuning disks that are loaded with RF-MEMS capacitor banks. A two-pole filter is designed and measured based on the proposed tuning concept. The filter operates at 2.5 GHz with a bandwidth of 22 MHz. Measurement results demonstrate a tuning range of 110 MHz while the quality factor is above 374 (1300–374 over the tuning range).",
"title": ""
}
] |
scidocsrr
|
a897dd674fb895a7bc3486189fe00400
|
Locality Preserving Projections
|
[
{
"docid": "d02a1619b53ba42e1dbc1fa7a2c65da8",
"text": "The 117 manuscripts submitted for the Hypertext '91 conference were assigned to members of the review committee, using a variety of automated methods based on information retrieval principles and Latent Semantic Indexing. Fifteen reviewers provided exhaustive ratings for the submitted abstracts, indicating how well each abstract matched their interests. The automated methods do a fairly good job of assigning relevant papers for review, but they are still somewhat poorer than assignments made manually by human experts and substantially poorer than an assignment perfectly matching the reviewers' own ranking of the papers. A new automated assignment method called “n of 2n” achieves better performance than human experts by sending reviewers more papers than they actually have to review and then allowing them to choose part of their review load themselves.",
"title": ""
}
] |
[
{
"docid": "43fe2c4898a643be10928e8f677a59ef",
"text": "When people want to move to a new job, it is often difficult since there is too much job information available. To select an appropriate job and then submit a resume is tedious. It is particularly difficult for university students since they normally do not have any work experience and also are unfamiliar with the job market. To deal with the information overload for students during their transition into work, a job recommendation system can be very valuable. In this research, after fully investigating the pros and cons of current job recommendation systems for university students, we propose a student profiling based re-ranking framework. In this system, the students are recommended a list of potential jobs based on those who have graduated and obtained job offers over the past few years. Furthermore, recommended employers are also used as input for job recommendation result re-ranking. Our experimental study on real recruitment data over the past four years has shown this method’s potential.",
"title": ""
},
{
"docid": "5077d46e909db94b510da0621fcf3a9e",
"text": "This paper presents a high SNR self-capacitance sensing 3D hover sensor that does not use panel offset cancelation blocks. Not only reducing noise components, but increasing the signal components together, this paper achieved a high SNR performance while consuming very low power and die-area. Thanks to the proposed separated structure between driving and sensing circuits of the self-capacitance sensing scheme (SCSS), the signal components are increased without using high-voltage MOS sensing amplifiers which consume big die-area and power and badly degrade SNR. In addition, since a huge panel offset problem in SCSS is solved exploiting the panel's natural characteristics, other costly resources are not required. Furthermore, display noise and parasitic capacitance mismatch errors are compressed. We demonstrate a 39dB SNR at a 1cm hover point under 240Hz scan rate condition with noise experiments, while consuming 183uW/electrode and 0.73mm2/sensor, which are the power per electrode and the die-area per sensor, respectively.",
"title": ""
},
{
"docid": "86069ba30042606be2a50780b81ce5d8",
"text": "This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of [1.5]°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of [1.25]°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.",
"title": ""
},
{
"docid": "dc66c80a5031c203c41c7b2908c941a3",
"text": "There has been a great deal of interest in defect prediction: using prediction models trained on historical data to help focus quality-control resources in ongoing development. Since most new projects don't have historical data, there is interest in cross-project prediction: using data from one project to predict defects in another. Sadly, results in this area have largely been disheartening. Most experiments in cross-project defect prediction report poor performance, using the standard measures of precision, recall and F-score. We argue that these IR-based measures, while broadly applicable, are not as well suited for the quality-control settings in which defect prediction models are used. Specifically, these measures are taken at specific threshold settings (typically thresholds of the predicted probability of defectiveness returned by a logistic regression model). However, in practice, software quality control processes choose from a range of time-and-cost vs quality tradeoffs: how many files shall we test? how many shall we inspect? Thus, we argue that measures based on a variety of tradeoffs, viz., 5%, 10% or 20% of files tested/inspected would be more suitable. We study cross-project defect prediction from this perspective. We find that cross-project prediction performance is no worse than within-project performance, and substantially better than random prediction!",
"title": ""
},
{
"docid": "11068c7b8ce924c7d83736f23475c30a",
"text": "Both oxytocin and serotonin modulate affiliative responses to partners and offspring. Animal studies suggest a crucial role of oxytocin in mammalian parturition and lactation but also in parenting and social interactions with offspring. The serotonergic system may also be important through its influence on mood and the release of oxytocin. We examined the role of serotonin transporter (5-HTT) and oxytocin receptor (OXTR) genes in explaining differences in sensitive parenting in a community sample of 159 Caucasian, middle-class mothers with their 2-year-old toddlers at risk for externalizing behavior problems, taking into account maternal educational level, maternal depression and the quality of the marital relationship. Independent genetic effects of 5-HTTLPR SCL6A4 and OXTR rs53576 on observed maternal sensitivity were found. Controlling for differences in maternal education, depression and marital discord, parents with the possibly less efficient variants of the serotonergic (5-HTT ss) and oxytonergic (AA/AG) system genes showed lower levels of sensitive responsiveness to their toddlers. Two-way and three-way interactions with marital discord or depression were not significant. This first study on the role of both OXTR and 5-HTT genes in human parenting points to molecular genetic differences that may be implicated in the production of oxytocin explaining differences in sensitive parenting.",
"title": ""
},
{
"docid": "27d5fc9c0a082719c44ab73a24e890cc",
"text": "Using a cross-sectional survey of a random sample of 7,945 college undergraduates, we report on the association between having received Green Dot active bystander behavior training and the frequency of actual and observed self-reported active bystander behaviors as well as violence acceptance norms. Of 2,504 students aged 18 to 26 who completed the survey, 46% had heard a Green Dot speech on campus, and 14% had received active bystander training during the past 2 years. Trained students had significantly lower rape myth acceptance scores than did students with no training. Trained students also reported engaging in significantly more bystander behaviors and observing more self-reported active bystander behaviors when compared with nontrained students. When comparing self-reported active bystander behavior scores of students trained with students hearing a Green Dot speech alone, the training was associated with significantly higher active bystander behavior scores. Those receiving bystander training appeared to report more active bystander behaviors than those simply hearing a Green Dot speech, and both intervention groups reported more observed and active bystander behaviors than nonexposed students.",
"title": ""
},
{
"docid": "67a9c0bda15ac57332076ba99ab4cf75",
"text": "An association between limb-girdle muscular dystrophy and autoimmune polyglandular syndrome type 1 (APS1), in three sisters born to consanguineous parents, is presented. The components of APS1 in these patients were hypoparathyroidism, autoimmune adrenal insufficiency, primary hypogonadism and mucocutaneous candidiasis. A muscle biopsy performed on the first patient showed over 40 % of trabeculated fibers, suggesting the diagnosis of myopathy with trabeculated fibers (MTF). Intracranial calcification was found in the second patient; and epilepsy, and several other minor components of APS1, in the third; cataracts were found in the last two patients. The clinical manifestations and inheritance of MTF and APS1 are reviewed. While recessive mutations in the AIRE gene (21q22.3) cause APS1, genetic transmission of hereditary MTF has not been investigated in depth. Mutations in CRYAA, a gene that shares the same locus as AIRE, may cause recessive inheritance of cataracts. Thus, the proposal of this article is that linkage of contiguous genes that includes the AIRE gene, might be responsible for the association of both diseases in these three patients. Additional involvement of CRYAA, that possibly causes cataracts in two of the patients, might support this hypothesis, due to the proximity of this gene to AIRE. The genes COL6A1 and COL6A2, localized in 21q22.3, are discarded as transmitters of MTF in these cases, on clinical criteria. The authors wish to draw attention to the association between limb-girdle muscular dystrophy and APS1, since it has been very rarely reported in the medical literature.",
"title": ""
},
{
"docid": "141e40234b9080d8f90b3fabb7d31ff9",
"text": "Recommender systems solve the problem of information overload by efficiently utilizing huge quantities of data and trying its best to predict potential preference which aims at a certain user. They are widely applied in numerous fields. However, hardly can we see a code recommender system for programmers though it is desperately expected. Raw data of the code are not so convenient to handle for the difference in structure and lack of relevance. Fortunately, in real world, there are abundant data affiliated to the code, such as context, tags, social relations of users and view histories. In this paper, we firstly formulate a new task of code recommendation. Then, we propose a hybrid linear algorithm for recommending source codes, in which we maximize the utility of multivariate heterogeneous auxiliary data with code. Experiments on the dataset from Code Review Community show that our proposed method works for the new code recommendation task. Our system is hopefully designed to be adaptive to new source of heterogeneous information, and hopefully performs better with more significant data and new inspired components.",
"title": ""
},
{
"docid": "d0f71092df2eab53e7f32eff1cb7af2e",
"text": "Topic modeling of textual corpora is an important and challenging problem. In most previous work, the “bag-of-words” assumption is usually made which ignores the ordering of words. This assumption simplifies the computation, but it unrealistically loses the ordering information and the semantic of words in the context. In this paper, we present a Gaussian Mixture Neural Topic Model (GMNTM) which incorporates both the ordering of words and the semantic meaning of sentences into topic modeling. Specifically, we represent each topic as a cluster of multi-dimensional vectors and embed the corpus into a collection of vectors generated by the Gaussian mixture model. Each word is affected not only by its topic, but also by the embedding vector of its surrounding words and the context. The Gaussian mixture components and the topic of documents, sentences and words can be learnt jointly. Extensive experiments show that our model can learn better topics and more accurate word distributions for each topic. Quantitatively, comparing to state-of-the-art topic modeling approaches, GMNTM obtains significantly better performance in terms of perplexity, retrieval accuracy and classification accuracy.",
"title": ""
},
{
"docid": "d0dafdd3a949c0a9725ad6037c16f32b",
"text": "KNN and SVM are two machine learning approaches to Text Categorization (TC) based on the Vector Space Model. In this model, borrowed from Information Retrieval, documents are represented as a vector where each component is associated with a particular word from the vocabulary. Traditionally, each component value is assigned using the information retrieval TFIDF measure. While this weighting method seems very appropriate for IR, it is not clear that it is the best choice for TC problems. Actually, this weighting method does not leverage the information implicitly contained in the categorization task to represent documents. In this paper, we introduce a new weighting method based on statistical estimation of the importance of a word for a specific categorization problem. This method also has the benefit to make feature selection implicit, since useless features for the categorization problem considered get a very small weight. Extensive experiments reported in the paper shows that this new weighting method improves significantly the classification accuracy as measured on many categorization tasks.",
"title": ""
},
{
"docid": "5a4c9b6626d2d740246433972ad60f16",
"text": "We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps along a one-dimensional path. Our findings can be summarized as follows:",
"title": ""
},
{
"docid": "16a3bf4df6fb8e61efad6f053f1c6f9c",
"text": "The objective of this paper is to improve large scale visual object retrieval for visual place recognition. Geo-localization based on a visual query is made difficult by plenty of non-distinctive features which commonly occur in imagery of urban environments, such as generic modern windows, doors, cars, trees, etc. The focus of this work is to adapt standard Hamming Embedding retrieval system to account for varying descriptor distinctiveness. To this end, we propose a novel method for efficiently estimating distinctiveness of all database descriptors, based on estimating local descriptor density everywhere in the descriptor space. In contrast to all competing methods, the (unsupervised) training time for our method (DisLoc) is linear in the number database descriptors and takes only a 100 seconds on a single CPU core for a 1 million image database. Furthermore, the added memory requirements are negligible (1%). The method is evaluated on standard publicly available large-scale place recognition benchmarks containing street-view imagery of Pittsburgh and San Francisco. DisLoc is shown to outperform all baselines, while setting the new state-of-the-art on both benchmarks. The method is compatible with spatial reranking, which further improves recognition results. Finally, we also demonstrate that 7% of the least distinctive features can be removed, therefore reducing storage requirements and improving retrieval speed, without any loss in place recognition accuracy.",
"title": ""
},
{
"docid": "55a6353fa46146d89c7acd65bee237b5",
"text": "The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion. In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93\\% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2\\%) and an acceptable false positive rate (5.15\\%) for a vetting purpose.",
"title": ""
},
{
"docid": "f005ebceeac067ffae197fee603ed8c7",
"text": "The extended Kalman filter (EKF) is one of the most widely used methods for state estimation with communication and aerospace applications based on its apparent simplicity and tractability (Shi et al., 2002; Bolognani et al., 2003; Wu et al., 2004). However, for an EKF to guarantee satisfactory performance, the system model should be known exactly. Unknown external disturbances may result in the inaccuracy of the state estimate, even cause divergence. This difficulty has been recognized in the literature (Reif & Unbehauen, 1999; Reif et al., 2000), and several schemes have been developed to overcome it. A traditional approach to improve the performance of the filter is the 'covariance setting' technique, where a positive definite estimation error covariance matrix is chosen by the filter designer (Einicke et al., 2003; Bolognani et al., 2003). As it is difficult to manually tune the covariance matrix for dynamic system, adaptive extended Kalman filter (AEKF) approaches for online estimation of the covariance matrix have been adopted (Kim & ILTIS, 2004; Yu et al., 2005; Ahn & Won, 2006). However, only in some special cases, the optimal estimation of the covariance matrix can be obtained. And inaccurate approximation of the covariance matrix may blur the state estimate. Recently, the robust H∞ filter has received considerable attention (Theodor et al., 1994; Shen & Deng, 1999; Zhang et al., 2005; Tseng & Chen, 2001). The robust filters take different forms depending on what kind of disturbances are accounted for, while the general performance criterion of the filters is to guarantee a bounded energy gain from the worst possible disturbance to the estimation error. Although the robust extended Kalman filter (REKF) has been deeply investigated (Einicke & White, 1999; Reif et al., 1999; Seo et al., 2006), how to prescribe the level of disturbances attenuation is still an open problem. In general, the selection of the attenuation level can be seen as a tradeoff between the optimality and the robustness. In other words, the robustness of the REKF is obtained at the expense of optimality. This chapter reviews the adaptive robust extended Kalman filter (AREKF), an effective algorithm which will remain stable in the presence of unknown disturbances, and yield accurate estimates in the absence of disturbances (Xiong et al., 2008). The key idea of the AREKF is to design the estimator based on the stability analysis, and determine whether the error covariance matrix should be reset according to the magnitude of the innovation. O pe n A cc es s D at ab as e w w w .in te ch w eb .o rg",
"title": ""
},
{
"docid": "0b8285c090fd6b725b3b04af9195c4fd",
"text": "We present a simple algorithm for computing a high-quality personalized avatar from a single color image and the corresponding depth map which have been captured by Microsoft’s Kinect sensor. Due to the low market price of our hardware setup, 3D face scanning becomes feasible for home use. The proposed algorithm combines the advantages of robust non-rigid registration and fitting of a morphable face model. We obtain a high-quality reconstruction of the facial geometry and texture along with one-to-one correspondences with our generic face model. This representation allows for a wide range of further applications such as facial animation or manipulation. Our algorithm has proven to be very robust. Since it does not require any user interaction, even non-expert users can easily create their own personalized avatars. Copyright # 2011 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "09cffaca68a254f591187776e911d36e",
"text": "Signaling across cellular membranes, the 826 human G protein-coupled receptors (GPCRs) govern a wide range of vital physiological processes, making GPCRs prominent drug targets. X-ray crystallography provided GPCR molecular architectures, which also revealed the need for additional structural dynamics data to support drug development. Here, nuclear magnetic resonance (NMR) spectroscopy with the wild-type-like A2A adenosine receptor (A2AAR) in solution provides a comprehensive characterization of signaling-related structural dynamics. All six tryptophan indole and eight glycine backbone 15N-1H NMR signals in A2AAR were individually assigned. These NMR probes provided insight into the role of Asp522.50 as an allosteric link between the orthosteric drug binding site and the intracellular signaling surface, revealing strong interactions with the toggle switch Trp 2466.48, and delineated the structural response to variable efficacy of bound drugs across A2AAR. The present data support GPCR signaling based on dynamic interactions between two semi-independent subdomains connected by an allosteric switch at Asp522.50.",
"title": ""
},
{
"docid": "1aae1983cd48f1f0b8b6f2b165334d5b",
"text": "Many modern highly scalable scientific simulations packages rely on small matrix multiplications as their main computational engine. Math libraries or compilers are unlikely to provide the best possible kernel performance. To address this issue, we present a library which provides high performance small matrix multiplications targeting all recent x86 vector instruction set extensions up to Intel AVX-512. Our evaluation proves that speed-ups of more than 10× are possible depending on the CPU and application. These speed-ups are achieved by a combination of several novel technologies. We use a code generator which has a built-in architectural model to create code which runs well without requiring an auto-tuning phase. Since such code is very specialized we leverage just-in-time compilation to only build the required kernel variant at runtime. To keep ease-of-use, overhead, and kernel management under control we accompany our library with a BLAS-compliant frontend which features a multi-level code-cache hierarchy.",
"title": ""
},
{
"docid": "120e534cada76f9cb61b0bf64d9792de",
"text": "Smartphone advertisement is increasingly used among many applications and allows developers to obtain revenue through in-app advertising. Our study aims at identifying potential security risks of mobile-based advertising services where advertisers are charged for their advertisements on mobile applications. In the Android platform, we particularly implement bot programs that can massively generate click events on advertisements on mobile applications and test their feasibility with eight popular advertising networks. Our experimental results show that six advertising networks (75%) out of eight are vulnerable to our attacks. To mitigate click fraud attacks, we suggest three possible defense mechanisms: (1) filtering out program-generated touch events; (2) identifying click fraud attacks with faked advertisement banners; and (3) detecting anomalous behaviors generated by click fraud attacks. We also discuss why few companies were only willing to deploy such defense mechanisms by examining economic misincentives on the mobile advertising industry.",
"title": ""
},
{
"docid": "53fcd73bc4ce14af8ad5972e4d5d9dcf",
"text": "In this letter, the five-level packed U-cell (PUC5) inverter is reconfigured with two identical dc links operating as an active power filter (APF). Generally, the peak voltage of an APF should be greater than the ac voltage at the point-of-common coupling (PCC) to ensure the boost operation of the converter in order to inject harmonic current into the system effectively; therefore, full compensation can be obtained. The proposed modified PUC5 (MPUC5) converter has two equally regulated separated dc links, which can operate at no load condition useful for APF application. Those divided dc terminals amplitudes are added at the input of the MPUC5 converter to generate a boosted voltage that is higher than the PCC voltage. Consequently, the reduced dc-links voltages are achieved since they do not individually need to be higher than the PCC voltage due to the mentioned fact that their summation has to be higher than PCC voltage. The voltage balancing unit is integrated into the modulation technique to be decoupled from the APF controller. The proposed APF is practically tested to validate its good dynamic performance in harmonic elimination, ac-side power factor correction, reactive power compensation, and power quality improvement.",
"title": ""
},
{
"docid": "8115fddcf7bd64ad0976619f0a51e5a8",
"text": "Current research in content-based semantic image understanding is largely confined to exemplar-based approaches built on low-level feature extraction and classification. The ability to extract both low-level and semantic features and perform knowledge integration of different types of features is expected to raise semantic image understanding to a new level. Belief networks, or Bayesian networks (BN), have proven to be an effective knowledge representation and inference engine in artificial intelligence and expert systems research. Their effectiveness is due to the ability to explicitly integrate domain knowledge in the network structure and to reduce a joint probability distribution to conditional independence relationships. In this paper, we present a general-purpose knowledge integration framework that employs BN in integrating both low-level and semantic features. The efficacy of this framework is demonstrated via three applications involving semantic understanding of pictorial images. The first application aims at detecting main photographic subjects in an image, the second aims at selecting the most appealing image in an event, and the third aims at classifying images into indoor or outdoor scenes. With these diverse examples, we demonstrate that effective inference engines can be built within this powerful and flexible framework according to specific domain knowledge and available training data to solve inherently uncertain vision problems. 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
3522f6f9a5740a1562e42366aa734fe0
|
Routing betweenness centrality
|
[
{
"docid": "e054c2d3b52441eaf801e7d2dd54dce9",
"text": "The concept of centrality is often invoked in social network analysis, and diverse indices have been proposed to measure it. This paper develops a unified framework for the measurement of centrality. All measures of centrality assess a node’s involvement in the walk structure of a network. Measures vary along four key dimensions: type of nodal involvement assessed, type of walk considered, property of walk assessed, and choice of summary measure. If we cross-classify measures by type of nodal involvement (radial versus medial) and property of walk assessed (volume versus length), we obtain a four-fold polychotomization with one cell empty which mirrors Freeman’s 1979 categorization. At a more substantive level, measures of centrality summarize a node’s involvement in or contribution to the cohesiveness of the network. Radial measures in particular are reductions of pair-wise proximities/cohesion to attributes of nodes or actors. The usefulness and interpretability of radial measures depend on the fit of the cohesion matrix to the onedimensional model. In network terms, a network that is fit by a one-dimensional model has a core-periphery structure in which all nodes revolve more or less closely around a single core. This in turn implies that the network does not contain distinct cohesive subgroups. Thus, centrality is shown to be intimately connected with the cohesive subgroup structure of a network. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "d041b33794a14d07b68b907d38f29181",
"text": "This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called \"Constant Load\" and \"Constant Number of Records\", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.",
"title": ""
},
{
"docid": "801a197f630189ab0a9b79d3cbfe904b",
"text": "Historically, Vivaldi arrays are known to suffer from high cross-polarization when scanning in the nonprincipal planes—a fault without a universal solution. In this paper, a solution to this issue is proposed in the form of a new Vivaldi-type array with low cross-polarization termed the Sliced Notch Antenna (SNA) array. For the first proof-of-concept demonstration, simulations and measurements are comparatively presented for two single-polarized <inline-formula> <tex-math notation=\"LaTeX\">$19 \\times 19$ </tex-math></inline-formula> arrays—the proposed SNA and its Vivaldi counterpart—each operating over a 1.2–12 GHz (10:1) band. Both arrays are built using typical vertically integrated printed-circuit board cards, and are designed to exhibit VSWR < 2.5 within a 60° scan cone over most of the 10:1 band as infinite arrays. Measurement results compare very favorably with full-wave finite array simulations that include array truncation effects. The SNA array element demonstrates well-behaved polarization performance versus frequency, with more than 20 dB of D-plane <inline-formula> <tex-math notation=\"LaTeX\">$\\theta \\!=\\!45 {^{\\circ }}$ </tex-math></inline-formula> polarization purity improvement at the high frequency. Moreover, the SNA element also: 1) offers better suppression of classical Vivaldi E-plane scan blindnesses; 2) requires fewer plated through vias for stripline-based designs; and 3) allows relaxed adjacent element electrical contact requirements for dual-polarized arrangements.",
"title": ""
},
{
"docid": "53aa1145047cc06a1c401b04896ff1b1",
"text": "Due to the increasing availability of whole slide scanners facilitating digitization of histopathological tissue, there is a strong demand for the development of computer based image analysis systems. In this work, the focus is on the segmentation of the glomeruli constituting a highly relevant structure in renal histopathology, which has not been investigated before in combination with CNNs. We propose two different CNN cascades for segmentation applications with sparse objects. These approaches are applied to the problem of glomerulus segmentation and compared with conventional fully-convolutional networks. Overall, with the best performing cascade approach, single CNNs are outperformed and a pixel-level Dice similarity coefficient of 0.90 is obtained. Combined with qualitative and further object-level analyses the obtained results are assessed as excellent also compared to recent approaches. In conclusion, we can state that especially one of the proposed cascade networks proved to be a highly powerful tool for segmenting the renal glomeruli providing best segmentation accuracies and also keeping the computing time at a low level.",
"title": ""
},
{
"docid": "e31fd6ce6b78a238548e802d21b05590",
"text": "Machine learning techniques have long been used for various purposes in software engineering. This paper provides a brief overview of the state of the art and reports on a number of novel applications I was involved with in the area of software testing. Reflecting on this personal experience, I draw lessons learned and argue that more research should be performed in that direction as machine learning has the potential to significantly help in addressing some of the long-standing software testing problems.",
"title": ""
},
{
"docid": "e2535e6887760b20a18c25385c2926ef",
"text": "The rapid growth in demands for computing everywhere has made computer a pivotal component of human mankind daily lives. Whether we use the computers to gather information from the Web, to utilize them for entertainment purposes or to use them for running businesses, computers are noticeably becoming more widespread, mobile and smaller in size. What we often overlook and did not notice is the presence of those billions of small pervasive computing devices around us which provide the intelligence being integrated into the real world. These pervasive computing devices can help to solve some crucial problems in the activities of our daily lives. Take for examples, in the military application, a large quantity of the pervasive computing devices could be deployed over a battlefield to detect enemy intrusion instead of manually deploying the landmines for battlefield surveillance and intrusion detection Chong et al. (2003). Additionally, in structural health monitoring, these pervasive computing devices are also used to detect for any damage in buildings, bridges, ships and aircraft Kurata et al. (2006). To achieve this vision of pervasive computing, also known as ubiquitous computing, many computational devices are integrated in everyday objects and activities to enable better humancomputer interaction. These computational devices are generally equipped with sensing, processing and communicating abilities and these devices are known as wireless sensor nodes. When several wireless sensor nodes are meshed together, they form a network called the Wireless Sensor Network (WSN). Sensor nodes arranged in network form will definitely exhibit more and better characteristics than individual sensor nodes. WSN is one of the popular examples of ubiquitous computing as it represents a new generation of real-time embedded system which offers distinctly attractive enabling technologies for pervasive computing environments. Unlike the conventional networked systems like Wireless Local Area Network (WLAN) and Global System for Mobile communications (GSM), WSN promise to couple end users directly to sensor measurements and provide information that is precisely localized in time and/or space, according to the users’ needs or demands. In the Massachusetts Institute of Technology (MIT) technology review magazine of innovation published in February 2003 MIT (2003), the editors have identified Wireless Sensor Networks as the first of the top ten emerging technologies that will change the world. This explains why WSN has swiftly become a hot research topic in both academic and industry. 2",
"title": ""
},
{
"docid": "958fea977cf31ddabd291da68754367d",
"text": "Recently, learning based hashing techniques have attracted broad research interests because they can support efficient storage and retrieval for high-dimensional data such as images, videos, documents, etc. However, a major difficulty of learning to hash lies in handling the discrete constraints imposed on the pursued hash codes, which typically makes hash optimizations very challenging (NP-hard in general). In this work, we propose a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification. By introducing an auxiliary variable, we reformulate the objective such that it can be solved substantially efficiently by employing a regularization algorithm. One of the key steps in this algorithm is to solve a regularization sub-problem associated with the NP-hard binary optimization. We show that the sub-problem admits an analytical solution via cyclic coordinate descent. As such, a high-quality discrete solution can eventually be obtained in an efficient computing manner, therefore enabling to tackle massive datasets. We evaluate the proposed approach, dubbed Supervised Discrete Hashing (SDH), on four large image datasets and demonstrate its superiority to the state-of-the-art hashing methods in large-scale image retrieval.",
"title": ""
},
{
"docid": "2e7ee3674bdd58967380a59d638b2b17",
"text": "Media applications are characterized by large amounts of available parallelism, little data reuse, and a high computation to memory access ratio. While these characteristics are poorly matched to conventional microprocessor architectures, they are a good fit for modern VLSI technology with its high arithmetic capacity but limited global bandwidth. The stream programming model, in which an application is coded as streams of data records passing through computation kernels, exposes both parallelism and locality in media applications that can be exploited by VLSI architectures. The Imagine architecture supports the stream programming model by providing a bandwidth hierarchy tailored to the demands of media applications. Compared to a conventional scalar processor, Imagine reduces the global register and memory bandwidth required by typical applications by factors of 13 and 21 respectively. This bandwidth efficiency enables a single chip Imagine processor to achieve a peak performance of 16.2GFLOPS (single-precision floating point) and sustained performance of up to 8.5GFLOPS on media processing kernels.",
"title": ""
},
{
"docid": "f54631ac73d42af0ccb2811d483fe8c2",
"text": "Understanding large, structured documents like scholarly articles, requests for proposals or business reports is a complex and difficult task. It involves discovering a document’s overall purpose and subject(s), understanding the function and meaning of its sections and subsections, and extracting low level entities and facts about them. In this research, we present a deep learning based document ontology to capture the general purpose semantic structure and domain specific semantic concepts from a large number of academic articles and business documents. The ontology is able to describe different functional parts of a document, which can be used to enhance semantic indexing for a better understanding by human beings and machines. We evaluate our models through extensive experiments on datasets of scholarly articles from arxiv and Request for Proposal documents.",
"title": ""
},
{
"docid": "3038ec4ac3d648a4ec052b8d7f854107",
"text": "Anomalous data can negatively impact energy forecasting by causing model parameters to be incorrectly estimated. This paper presents two approaches for the detection and imputation of anomalies in time series data. Autoregressive with exogenous inputs (ARX) and artificial neural network (ANN) models are used to extract the characteristics of time series. Anomalies are detected by performing hypothesis testing on the extrema of the residuals, and the anomalous data points are imputed using the ARX and ANN models. Because the anomalies affect the model coefficients, the data cleaning process is performed iteratively. The models are re-learned on “cleaner” data after an anomaly is imputed. The anomalous data are reimputed to each iteration using the updated ARX and ANN models. The ARX and ANN data cleaning models are evaluated on natural gas time series data. This paper demonstrates that the proposed approaches are able to identify and impute anomalous data points. Forecasting models learned on the unclean data and the cleaned data are tested on an uncleaned out-of-sample dataset. The forecasting model learned on the cleaned data outperforms the model learned on the unclean data with 1.67% improvement in the mean absolute percentage errors and a 32.8% improvement in the root mean squared error. Existing challenges include correctly identifying specific types of anomalies such as negative flows.",
"title": ""
},
{
"docid": "43685bd1927f309c8b9a5edf980ab53f",
"text": "In this paper we propose a pipeline for accurate 3D reconstruction from multiple images that deals with some of the possible sources of inaccuracy present in the input data. Namely, we address the problem of inaccurate camera calibration by including a method [1] adjusting the camera parameters in a global structure-and-motion problem which is solved with a depth map representation that is suitable to large scenes. Secondly, we take the triangular mesh and calibration improved by the global method in the first phase to refine the surface both geometrically and radiometrically. Here we propose surface energy which combines photo consistency with contour matching and minimize it with a gradient method. Our main contribution lies in effective computation of the gradient that naturally balances weight between regularizing and data terms by employing scale space approach to find the correct local minimum. The results are demonstrated on standard high-resolution datasets and a complex outdoor scene.",
"title": ""
},
{
"docid": "3eeacf0fb315910975e5ff0ffc4fe800",
"text": "Social networks are rich in various kinds of contents such as text and multimedia. The ability to apply text mining algorithms effectively in the context of text data is critical for a wide variety of applications. Social networks require text mining algorithms for a wide variety of applications such as keyword search, classi cation, and clustering. While search and classi cation are well known applications for a wide variety of scenarios, social networks have a much richer structure both in terms of text and links. Much of the work in the area uses either purely the text content or purely the linkage structure. However, many recent algorithms use a combination of linkage and content information for mining purposes. In many cases, it turns out that the use of a combination of linkage and content information provides much more effective results than a system which is based purely on either of the two. This paper provides a survey of such algorithms, and the advantages observed by using such algorithms in different scenarios. We also present avenues for future research in this area.",
"title": ""
},
{
"docid": "772193675598233ba1ab60936b3091d4",
"text": "The proposed quasiresonant control scheme can be widely used in a dc-dc flyback converter because it can achieve high efficiency with minimized external components. The proposed dynamic frequency selector improves conversion efficiency especially at light loads to meet the requirement of green power since the converter automatically switches to the discontinuous conduction mode for reducing the switching frequency and the switching power loss. Furthermore, low quiescent current can be guaranteed by the constant current startup circuit to further reduce power loss after the startup procedure. The test chip fabricated in VIS 0.5 μm 500 V UHV process occupies an active silicon area of 3.6 mm 2. The peak efficiency can achieve 92% at load of 80 W and 85% efficiency at light load of 5 W.",
"title": ""
},
{
"docid": "2fa61482be37fd956e6eceb8e517411d",
"text": "According to analysis reports on road accidents of recent years, it's renowned that the main cause of road accidents resulting in deaths, severe injuries and monetary losses, is due to a drowsy or a sleepy driver. Drowsy state may be caused by lack of sleep, medication, drugs or driving continuously for long time period. An increase rate of roadside accidents caused due to drowsiness during driving indicates a need of a system that detects such state of a driver and alerts him prior to the occurrence of any accident. During the recent years, many researchers have shown interest in drowsiness detection. Their approaches basically monitor either physiological or behavioral characteristics related to the driver or the measures related to the vehicle being used. A literature survey summarizing some of the recent techniques proposed in this area is provided. To deal with this problem we propose an eye blink monitoring algorithm that uses eye feature points to determine the open or closed state of the eye and activate an alarm if the driver is drowsy. Detailed experimental findings are also presented to highlight the strengths and weaknesses of our technique. An accuracy of 94% has been recorded for the proposed methodology.",
"title": ""
},
{
"docid": "2049ad444e14db330e2256ce412a19f8",
"text": "1 of 11 08/06/07 18:23 Original: http://thebirdman.org/Index/Others/Others-Doc-Environment&Ecology/ +Doc-Environment&Ecology-FoodMatters/StimulatingPlantGrowthWithElectricity&Magnetism&Sound.htm 2007-08-06 Link here: http://blog.lege.net/content/StimulatingPlantGrowthWithElectricityMagnetismSound.html PDF \"printout\": http://blog.lege.net/content/StimulatingPlantGrowthWithElectricityMagnetismSound.pdf",
"title": ""
},
{
"docid": "af08fa19de97eed61afd28893692e7ec",
"text": "OpenACC is a new accelerator programming interface that provides a set of OpenMP-like loop directives for the programming of accelerators in an implicit and portable way. It allows the programmer to express the offloading of data and computations to accelerators, such that the porting process for legacy CPU-based applications can be significantly simplified. This paper focuses on the performance aspects of OpenACC using two micro benchmarks and one real-world computational fluid dynamics application. Both evaluations show that in general OpenACC performance is approximately 50\\% lower than CUDA. However, for some applications it can reach up to 98\\% with careful manual optimizations. The results also indicate several limitations of the OpenACC specification that hamper full use of the GPU hardware resources, resulting in a significant performance gap when compared to a fully tuned CUDA code. The lack of a programming interface for the shared memory in particular results in as much as three times lower performance.",
"title": ""
},
{
"docid": "c4043bfa8cfd74f991ac13ce1edd5bf5",
"text": "Citations between scientific papers and related bibliometric indices, such as the h-index for authors and the impact factor for journals, are being increasingly used – often in controversial ways – as quantitative tools for research evaluation. Yet, a fundamental research question remains still open: to which extent do quantitative metrics capture the significance of scientific works? We analyze the network of citations among the 449, 935 papers published by the American Physical Society (APS) journals between 1893 and 2009, and focus on the comparison of metrics built on the citation count with network-based metrics. We contrast five article-level metrics with respect to the rankings that they assign to a set of fundamental papers, called Milestone Letters, carefully selected by the APS editors for “making long-lived contributions to physics, either by announcing significant discoveries, or by initiating new areas of research”. A new metric, which combines PageRank centrality with the explicit requirement that paper score is not biased by paper age, is the best-performing metric overall in identifying the Milestone Letters. The lack of time bias in the new metric makes it also possible to use it to compare papers of different age on the same scale. We find that networkbased metrics identify the Milestone Letters better than metrics based on the citation count, which suggests that the structure of the citation network contains information that can be used to improve the ranking of scientific publications. The methods and results presented here are relevant for all evolving systems where network centrality metrics are applied, for example the World Wide Web and online social networks.",
"title": ""
},
{
"docid": "54130e2dd3a202935facdad39c04d914",
"text": "Cross modal face matching between the thermal and visible spectrum is a much desired capability for night-time surveillance and security applications. Due to a very large modality gap, thermal-to-visible face recognition is one of the most challenging face matching problem. In this paper, we present an approach to bridge this modality gap by a significant margin. Our approach captures the highly non-linear relationship between the two modalities by using a deep neural network. Our model attempts to learn a non-linear mapping from visible to thermal spectrum while preserving the identity information. We show substantive performance improvement on a difficult thermal-visible face dataset (UND-X1). The presented approach improves the state-of-the-art by more than 10% in terms of Rank-1 identification and bridge the drop in performance due to the modality gap by more than 40%. The goal of training the deep network is to learn the projections that can be used to bring the two modalities together. Typically, this would mean regressing the representation from one modality towards the other. We construct a deep network comprising N +1 layers with m(k) units in the k-th layer, where k = 1,2, · · · ,N. For an input of x ∈Rd , each layer will output a non-linear projection by using the learned projection matrix W and the non-linear activation function g(·). The output of the k-th hidden layer is h(k) = g(W(k)h(k−1) + b(k)), where W(k) ∈ Rm×m(k−1) is the projection matrix to be learned in that layer, b(k) ∈Rm is a bias vector and g : Rm 7→ Rm is the non-linear activation function. Similarly, the output of the most top level hidden layer can be computed as:",
"title": ""
},
{
"docid": "76e62af2971de3d11d684f1dd7100475",
"text": "Recent advances in memory research suggest methods that can be applied to enhance educational practices. We outline four principles of memory improvement that have emerged from research: 1) process material actively, 2) practice retrieval, 3) use distributed practice, and 4) use metamemory. Our discussion of each principle describes current experimental research underlying the principle and explains how people can take advantage of the principle to improve their learning. The techniques that we suggest are designed to increase efficiency—that is, to allow a person to learn more, in the same unit of study time, than someone using less efficient memory strategies. A common thread uniting all four principles is that people learn best when they are active participants in their own learning.",
"title": ""
},
{
"docid": "8eab9eab5b3d93e6688337128d647b06",
"text": "Primary triple-negative breast cancers (TNBCs), a tumour type defined by lack of oestrogen receptor, progesterone receptor and ERBB2 gene amplification, represent approximately 16% of all breast cancers. Here we show in 104 TNBC cases that at the time of diagnosis these cancers exhibit a wide and continuous spectrum of genomic evolution, with some having only a handful of coding somatic aberrations in a few pathways, whereas others contain hundreds of coding somatic mutations. High-throughput RNA sequencing (RNA-seq) revealed that only approximately 36% of mutations are expressed. Using deep re-sequencing measurements of allelic abundance for 2,414 somatic mutations, we determine for the first time—to our knowledge—in an epithelial tumour subtype, the relative abundance of clonal frequencies among cases representative of the population. We show that TNBCs vary widely in their clonal frequencies at the time of diagnosis, with the basal subtype of TNBC showing more variation than non-basal TNBC. Although p53 (also known as TP53), PIK3CA and PTEN somatic mutations seem to be clonally dominant compared to other genes, in some tumours their clonal frequencies are incompatible with founder status. Mutations in cytoskeletal, cell shape and motility proteins occurred at lower clonal frequencies, suggesting that they occurred later during tumour progression. Taken together, our results show that understanding the biology and therapeutic responses of patients with TNBC will require the determination of individual tumour clonal genotypes.",
"title": ""
},
{
"docid": "b8fcade88646ef6926e756f92064477b",
"text": "We have developed a stencil routing algorithm for implementing a GPU accelerated A-Buffer, by using a multisample texture to store a vector of fragments per pixel. First, all the fragments are captured per pixel in rasterization order. Second, a fullscreen shader pass sorts the fragments using a bitonic sort. At this point, the sorted fragments can be blended arbitrarily to implement various types of algorithms such as order independent transparency or layered depth image generation. Since we handle only 8 fragments per pass, we developed a method for detecting overflow, so we can do additional passes to capture more fragments.",
"title": ""
}
] |
scidocsrr
|
e69e872948f131f16acf40c2288c7b81
|
Food Hardships and Child Behavior Problems among Low-income Children
|
[
{
"docid": "e91f0323df84e4c79e26822a799d54fd",
"text": "Researchers have renewed an interest in the harmful consequences of poverty on child development. This study builds on this work by focusing on one mechanism that links material hardship to child outcomes, namely the mediating effect of maternal depression. Using data from the National Maternal and Infant Health Survey, we found that maternal depression and poverty jeopardized the development of very young boys and girls, and to a certain extent, affluence buffered the deleterious consequences of depression. Results also showed that chronic maternal depression had severe implications for both boys and girls, whereas persistent poverty had a strong effect for the development of girls. The measures of poverty and maternal depression used in this study generally had a greater impact on measures of cognitive development than motor development.",
"title": ""
}
] |
[
{
"docid": "f2a677515866e995ff8e0e90561d7cbc",
"text": "Pattern matching and data abstraction are important concepts in designing programs, but they do not fit well together. Pattern matching depends on making public a free data type representation, while data abstraction depends on hiding the representation. This paper proposes the views mechanism as a means of reconciling this conflict. A view allows any type to be viewed as a free data type, thus combining the clarity of pattern matching with the efficiency of data abstraction.",
"title": ""
},
{
"docid": "ddb70e707b63b30ee8e3b98b43db12a0",
"text": "Taint-style vulnerabilities are a persistent problem in software development, as the recently discovered \"Heart bleed\" vulnerability strikingly illustrates. In this class of vulnerabilities, attacker-controlled data is passed unsanitized from an input source to a sensitive sink. While simple instances of this vulnerability class can be detected automatically, more subtle defects involving data flow across several functions or project-specific APIs are mainly discovered by manual auditing. Different techniques have been proposed to accelerate this process by searching for typical patterns of vulnerable code. However, all of these approaches require a security expert to manually model and specify appropriate patterns in practice. In this paper, we propose a method for automatically inferring search patterns for taint-style vulnerabilities in C code. Given a security-sensitive sink, such as a memory function, our method automatically identifies corresponding source-sink systems and constructs patterns that model the data flow and sanitization in these systems. The inferred patterns are expressed as traversals in a code property graph and enable efficiently searching for unsanitized data flows -- across several functions as well as with project-specific APIs. We demonstrate the efficacy of this approach in different experiments with 5 open-source projects. The inferred search patterns reduce the amount of code to inspect for finding known vulnerabilities by 94.9% and also enable us to uncover 8 previously unknown vulnerabilities.",
"title": ""
},
{
"docid": "78fafa0e14685d317ab88361d0a0dc8c",
"text": "Industry analysts expect volume production of integrated circuits on 300-mm wafers to start in 2001 or 2002. At that time, appropriate production equipment must be available. To meet this need, the MEDEA Project has supported us at ASM Europe in developing an advanced vertical batch furnace system for 300-mm wafers. Vertical furnaces are widely used for many steps in the production of integrated circuits. In volume production, these batch furnaces achieve a lower cost per production step than single-wafer processing methods. Applications for vertical furnaces are extensive, including the processing of low-pressure chemical vapor deposition (LPCVD) layers such as deposited oxides, polysilicon, and nitride. Furthermore, the furnaces can be used for oxidation and annealing treatments. As the complexity of IC technology increases, production equipment must meet the technology guidelines summarized in Table 1 from the Semiconductor Industry Association’s Roadmap. The table shows that the minimal feature size will sharply decrease, and likewise the particle size and level will decrease. The challenge in designing a new generation of furnaces for 300-mm wafers was to improve productivity as measured in throughput (number of wafers processed per hour), clean-room footprint, and capital cost. Therefore, we created a completely new design rather than simply upscaling the existing 200mm equipment.",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "459b07b78f3cbdcbd673881fd000da14",
"text": "The intersubject dependencies of false nonmatch rates were investigated for a minutiae-based biometric authentication process using single enrollment and verification measurements. A large number of genuine comparison scores were subjected to statistical inference tests that indicated that the number of false nonmatches depends on the subject and finger under test. This result was also observed if subjects associated with failures to enroll were excluded from the test set. The majority of the population (about 90%) showed a false nonmatch rate that was considerably smaller than the average false nonmatch rate of the complete population. The remaining 10% could be characterized as “goats” due to their relatively high probability for a false nonmatch. The image quality reported by the template extraction module only weakly correlated with the genuine comparison scores. When multiple verification attempts were investigated, only a limited benefit was observed for “goats,” since the conditional probability for a false nonmatch given earlier nonsuccessful attempts increased with the number of attempts. These observations suggest that (1) there is a need for improved identification of “goats” during enrollment (e.g., using dedicated signal-driven analysis and classification methods and/or the use of multiple enrollment images) and (2) there should be alternative means for identity verification in the biometric system under test in case of two subsequent false nonmatches.",
"title": ""
},
{
"docid": "ad3147f3a633ec8612dc25dfde4a4f0c",
"text": "A half-bridge integrated zero-voltage-switching (ZVS) full-bridge converter with reduced conduction loss for battery on-board chargers in electric vehicles (EVs) or plug-in hybrid electric vehicles (PHEVs) is proposed in this paper. The proposed converter features a reduction in primary-conduction loss and a lower secondary-voltage stress. In addition, the proposed converter has the most favorable characteristics as battery chargers as follows: a full ZVS capability and a significantly reduced output filter size due to the improved output waveform. In this paper, the circuit configuration, operation principle, and relevant analysis results of the proposed converter are described, followed by the experimental results on a prototype converter realized with a scale-downed 2-kW battery charger for EVs or PHEVs. The experimental results validate the theoretical analysis and show the effectiveness of the proposed converter as battery on-board chargers for EVs or PHEVs.",
"title": ""
},
{
"docid": "9304c82e4b19c2f5e23ca45e7f2c9538",
"text": "Previous work has shown that using the GPU as a brute force method for SELECT statements on a SQLite database table yields significant speedups. However, this requires that the entire table be selected and transformed from the B-Tree to row-column format. This paper investigates possible speedups by traversing B+ Trees in parallel on the GPU, avoiding the overhead of selecting the entire table to transform it into row-column format and leveraging the logarithmic nature of tree searches. We experiment with different input sizes, different orders of the B+ Tree, and batch multiple queries together to find optimal speedups for SELECT statements with single search parameters as well as range searches. We additionally make a comparison to a simple GPU brute force algorithm on a row-column version of the B+ Tree.",
"title": ""
},
{
"docid": "f2f2b48cd35d42d7abc6936a56aa580d",
"text": "Complete enumeration of all the sequences to establish global optimality is not feasible as the search space, for a general job-shop scheduling problem, ΠG has an upper bound of (n!). Since the early fifties a great deal of research attention has been focused on solving ΠG, resulting in a wide variety of approaches such as Branch and Bound, Simulated Annealing, Tabu Search, etc. However limited success has been achieved by these methods due to the shear intractability of this generic scheduling problem. Recently, much effort has been concentrated on using neural networks to solve ΠG as they are capable of adapting to new environments with little human intervention and can mimic thought processes. Major contributions in solving ΠG using a Hopfield neural network, as well as applications of back-error propagation to general scheduling problems are presented. To overcome the deficiencies in these applications a modified back-error propagation model, a simple yet powerful parallel architecture which can be successfully simulated on a personal computer, is applied to solve ΠG.",
"title": ""
},
{
"docid": "6e4c0b8625363e9acbe91c149af2c037",
"text": "OBJECTIVE\nThe present study assessed the effect of smoking on clinical, microbiological and immunological parameters in an experimental gingivitis model.\n\n\nMATERIAL AND METHODS\nTwenty-four healthy dental students were divided into two groups: smokers (n = 10); and nonsmokers (n = 14). Stents were used to prevent biofilm removal during brushing. Visible plaque index (VPI) and gingival bleeding index (GBI) were determined 5- on day -7 (running phase), baseline, 21 d (experimental gingivitis) and 28 d (resolution phase). Supragingival biofilm and gingival crevicular fluid were collected and assayed by checkerboard DNA-DNA hybridization and a multiplex analysis, respectively. Intragroup comparison was performed by Friedman and Dunn's multiple comparison tests, whereas the Mann-Whitney U-test was applied for intergroup analyses.\n\n\nRESULTS\nCessation of oral hygiene resulted in a significant increase in VPI, GBI and gingival crevicular fluid volume in both groups, which returned to baseline levels 7 d after oral hygiene was resumed. Smokers presented lower GBI than did nonsmokers (p < 0.05) at day 21. Smokers had higher total bacterial counts and higher proportions of red- and orange complex bacteria, as well as lower proportions of Actinomyces spp., and of purple- and yellow-complex bacteria (p < 0.05). Furthermore, the levels of key immune-regulatory cytokines, including interleukin (IL)-8, IL-17 and interferon-γ, were higher in smokers than in nonsmokers (p < 0.05).\n\n\nCONCLUSION\nSmokers and nonsmokers developed gingival inflammation after supragingival biofilm accumulation, but smokers had less bleeding, higher proportions of periodontal pathogens and distinct host-response patterns during the course of experimental gingivitis.",
"title": ""
},
{
"docid": "567445f68597ea8ff5e89719772819be",
"text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.",
"title": ""
},
{
"docid": "110b0837952be3e0aa01f4859190a116",
"text": "Automatic recommendation has become a popular research field: it allows the user to discover items that match their tastes. In this paper, we proposed an expanded autoencoder recommendation framework. The stacked autoencoders model is employed to extract the feature of input then reconstitution the input to do the recommendation. Then the side information of items and users is blended in the framework and the Huber function based regularization is used to improve the recommendation performance. The proposed recommendation framework is applied on the movie recommendation. Experimental results on a public database in terms of quantitative assessment show significant improvements over conventional methods.",
"title": ""
},
{
"docid": "29a2c5082cf4db4f4dde40f18c88ca85",
"text": "Human astrocytes are larger and more complex than those of infraprimate mammals, suggesting that their role in neural processing has expanded with evolution. To assess the cell-autonomous and species-selective properties of human glia, we engrafted human glial progenitor cells (GPCs) into neonatal immunodeficient mice. Upon maturation, the recipient brains exhibited large numbers and high proportions of both human glial progenitors and astrocytes. The engrafted human glia were gap-junction-coupled to host astroglia, yet retained the size and pleomorphism of hominid astroglia, and propagated Ca2+ signals 3-fold faster than their hosts. Long-term potentiation (LTP) was sharply enhanced in the human glial chimeric mice, as was their learning, as assessed by Barnes maze navigation, object-location memory, and both contextual and tone fear conditioning. Mice allografted with murine GPCs showed no enhancement of either LTP or learning. These findings indicate that human glia differentially enhance both activity-dependent plasticity and learning in mice.",
"title": ""
},
{
"docid": "d4f806a58d4cdc59cae675a765d4c6bc",
"text": "Our study examines whether ownership structure and boardroom characteristics have an effect on corporate financial fraud in China. The data come from the enforcement actions of the Chinese Securities Regulatory Commission (CSRC). The results from univariate analyses, where we compare fraud and nofraud firms, show that ownership and board characteristics are important in explaining fraud. However, using a bivariate probit model with partial observability we demonstrate that boardroom characteristics are important, while the type of owner is less relevant. In particular, the proportion of outside directors, the number of board meetings, and the tenure of the chairman are associated with the incidence of fraud. Our findings have implications for the design of appropriate corporate governance systems for listed firms. Moreover, our results provide information that can inform policy debates within the CSRC. D 2005 Elsevier B.V. All rights reserved. JEL classification: G34",
"title": ""
},
{
"docid": "40a6cc06e0e90fba161bc8bc8ec6446d",
"text": "Toxic comment classification has become an active research field with many recently proposed approaches. However, while these approaches address some of the task’s challenges others still remain unsolved and directions for further research are needed. To this end, we compare different deep learning and shallow approaches on a new, large comment dataset and propose an ensemble that outperforms all individual models. Further, we validate our findings on a second dataset. The results of the ensemble enable us to perform an extensive error analysis, which reveals open challenges for state-of-the-art methods and directions towards pending future research. These challenges include missing paradigmatic context and inconsistent dataset labels.",
"title": ""
},
{
"docid": "35dacb4b15e5c8fbd91cee6da807799a",
"text": "Stochastic gradient algorithms have been the main focus of large-scale learning problems and led to important successes in machine learning. The convergence of SGD depends on the careful choice of learning rate and the amount of the noise in stochastic estimates of the gradients. In this paper, we propose a new adaptive learning rate algorithm, which utilizes curvature information for automatically tuning the learning rates. The information about the element-wise curvature of the loss function is estimated from the local statistics of the stochastic first order gradients. We further propose a new variance reduction technique to speed up the convergence. In our experiments with deep neural networks, we obtained better performance compared to the popular stochastic gradient algorithms.",
"title": ""
},
{
"docid": "6ceab65cc9505cf21824e9409cf67944",
"text": "Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for prediction task and utilize Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on largescale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level",
"title": ""
},
{
"docid": "17676785398d4ed24cc04cb3363a7596",
"text": "Generative models (GMs) such as Generative Adversary Network (GAN) and Variational Auto-Encoder (VAE) have thrived these years and achieved high quality results in generating new samples. Especially in Computer Vision, GMs have been used in image inpainting, denoising and completion, which can be treated as the inference from observed pixels to corrupted pixels. However, images are hierarchically structured which are quite different from many real-world inference scenarios with non-hierarchical features. These inference scenarios contain heterogeneous stochastic variables and irregular mutual dependences. Traditionally they are modeled by Bayesian Network (BN). However, the learning and inference of BN model are NP-hard thus the number of stochastic variables in BN is highly constrained. In this paper, we adapt typical GMs to enable heterogeneous learning and inference in polynomial time. We also propose an extended autoregressive (EAR) model and an EAR with adversary loss (EARA) model and give theoretical results on their effectiveness. Experiments on several BN datasets show that our proposed EAR model achieves the best performance in most cases compared to other GMs. Except for black box analysis, we’ve also done a serial of experiments on Markov border inference of GMs for white box analysis and give theoretical results.",
"title": ""
},
{
"docid": "0300e887815610a2f7d26994d027fe78",
"text": "This paper presents a computer vision based method for bar code reading. Bar code's geometric features and the imaging system parameters are jointly extracted from a tilted low resolution bar code image. This approach enables the use of cost effective cameras, increases the depth of acquisition, and provides solutions for cases where image quality is low. The performance of the algorithm is tested on synthetic and real test images, and extension to a 2D bar code (PDF417) is also discussed.",
"title": ""
},
{
"docid": "84b018fa45e06755746309014854bb9a",
"text": "For years, ontologies have been known in computer science as consensual models of domains of discourse, usually implemented as formal definitions of the relevant conceptual entities. Researchers have written much about the potential benefits of using them, and most of us regard ontologies as central building blocks of the semantic Web and other semantic systems. Unfortunately, the number and quality of actual, \"non-toy\" ontologies available on the Web today is remarkably low. This implies that the semantic Web community has yet to build practically useful ontologies for a lot of relevant domains in order to make the semantic Web a reality. Theoretically minded advocates often assume that the lack of ontologies is because the \"stupid business people haven't realized ontologies' enormous benefits.\" As a liberal market economist, the author assumes that humans can generally figure out what's best for their well-being, at least in the long run, and that they act accordingly. In other words, the fact that people haven't yet created as many useful ontologies as the ontology research community would like might indicate either unresolved technical limitations or the existence of sound rationales for why individuals refrain from building them - or both. Indeed, several social and technical difficulties exist that put a brake on developing and eventually constrain the space of possible ontologies",
"title": ""
},
{
"docid": "dd723b23b4a7d702f8d34f15b5c90107",
"text": "Smartphones have become a prominent part of our technology driven world. When it comes to uncovering, analyzing and submitting evidence in today's criminal investigations, mobile phones play a more critical role. Thus, there is a strong need for software tools that can help investigators in the digital forensics field effectively analyze smart phone data to solve crimes.\n This paper will accentuate how digital forensic tools assist investigators in getting data acquisition, particularly messages, from applications on iOS smartphones. In addition, we will lay out the framework how to build a tool for verifying data integrity for any digital forensics tool.",
"title": ""
}
] |
scidocsrr
|
3d64739572b4db24f15ed648fc62cdd5
|
An Empirical Evaluation of Similarity Measures for Time Series Classification
|
[
{
"docid": "ceca5552bcb7a5ebd0b779737bc68275",
"text": "In a way similar to the string-to-string correction problem, we address discrete time series similarity in light of a time-series-to-time-series-correction problem for which the similarity between two time series is measured as the minimum cost sequence of edit operations needed to transform one time series into another. To define the edit operations, we use the paradigm of a graphical editing process and end up with a dynamic programming algorithm that we call time warp edit distance (TWED). TWED is slightly different in form from dynamic time warping (DTW), longest common subsequence (LCSS), or edit distance with real penalty (ERP) algorithms. In particular, it highlights a parameter that controls a kind of stiffness of the elastic measure along the time axis. We show that the similarity provided by TWED is a potentially useful metric in time series retrieval applications since it could benefit from the triangular inequality property to speed up the retrieval process while tuning the parameters of the elastic measure. In that context, a lower bound is derived to link the matching of time series into down sampled representation spaces to the matching into the original space. The empiric quality of the TWED distance is evaluated on a simple classification task. Compared to edit distance, DTW, LCSS, and ERP, TWED has proved to be quite effective on the considered experimental task.",
"title": ""
},
{
"docid": "510a43227819728a77ff0c7fa06fa2d0",
"text": "The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases.",
"title": ""
}
] |
[
{
"docid": "1349c5daedd71bdfccaa0ea48b3fd54a",
"text": "OBJECTIVE\nCraniosacral therapy (CST) is an alternative treatment approach, aiming to release restrictions around the spinal cord and brain and subsequently restore body function. A previously conducted systematic review did not obtain valid scientific evidence that CST was beneficial to patients. The aim of this review was to identify and critically evaluate the available literature regarding CST and to determine the clinical benefit of CST in the treatment of patients with a variety of clinical conditions.\n\n\nMETHODS\nComputerised literature searches were performed in Embase/Medline, Medline(®) In-Process, The Cochrane library, CINAHL, and AMED from database start to April 2011. Studies were identified according to pre-defined eligibility criteria. This included studies describing observational or randomised controlled trials (RCTs) in which CST as the only treatment method was used, and studies published in the English language. The methodological quality of the trials was assessed using the Downs and Black checklist.\n\n\nRESULTS\nOnly seven studies met the inclusion criteria, of which three studies were RCTs and four were of observational study design. Positive clinical outcomes were reported for pain reduction and improvement in general well-being of patients. Methodological Downs and Black quality scores ranged from 2 to 22 points out of a theoretical maximum of 27 points, with RCTs showing the highest overall scores.\n\n\nCONCLUSION\nThis review revealed the paucity of CST research in patients with different clinical pathologies. CST assessment is feasible in RCTs and has the potential of providing valuable outcomes to further support clinical decision making. However, due to the current moderate methodological quality of the included studies, further research is needed.",
"title": ""
},
{
"docid": "1de19775f0c32179f59674c7f0d8b540",
"text": "As the most commonly used bots in first-person shooter (FPS) online games, aimbots are notoriously difficult to detect because they are completely passive and resemble excellent honest players in many aspects. In this paper, we conduct the first field measurement study to understand the status quo of aimbots and how they play in the wild. For data collection purpose, we devise a novel and generic technique called baittarget to accurately capture existing aimbots from the two most popular FPS games. Our measurement reveals that cheaters who use aimbots cannot play as skillful as excellent honest players in all aspects even though aimbots can help them to achieve very high shooting performance. To characterize the unskillful and blatant nature of cheaters, we identify seven features, of which six are novel, and these features cannot be easily mimicked by aimbots. Leveraging this set of features, we propose an accurate and robust server-side aimbot detector called AimDetect. The core of AimDetect is a cascaded classifier that detects the inconsistency between performance and skillfulness of aimbots. We evaluate the efficacy and generality of AimDetect using the real game traces. Our results show that AimDetect can capture almost all of the aimbots with very few false positives and minor overhead.",
"title": ""
},
{
"docid": "961cc1dc7063706f8f66fc136da41661",
"text": "From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible \"statistical\" properties that are the object of learning. Much less attention has been given to defining what \"learning\" is in the context of \"statistical learning.\" One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures to our theory of SL.",
"title": ""
},
{
"docid": "3d56b369e10b29969132c44897d4cc4c",
"text": "Real-world object classes appear in imbalanced ratios. This poses a significant challenge for classifiers which get biased towards frequent classes. We hypothesize that improving the generalization capability of a classifier should improve learning on imbalanced datasets. Here, we introduce the first hybrid loss function that jointly performs classification and clustering in a single formulation. Our approach is based on an ‘affinity measure’ in Euclidean space that leads to the following benefits: (1) direct enforcement of maximum margin constraints on classification boundaries, (2) a tractable way to ensure uniformly spaced and equidistant cluster centers, (3) flexibility to learn multiple class prototypes to support diversity and discriminability in feature space. Our extensive experiments demonstrate the significant performance improvements on visual classification and verification tasks on multiple imbalanced datasets. The proposed loss can easily be plugged in any deep architecture as a differentiable block and demonstrates robustness against different levels of data imbalance and corrupted labels.",
"title": ""
},
{
"docid": "1ebf198459b98048404b706e4852eae2",
"text": "Network forensics is a branch of digital forensics, which applies to network security. It is used to relate monitoring and analysis of the computer network traffic, that helps us in collecting information and digital evidence, for the protection of network that can use as firewall and IDS. Firewalls and IDS can't always prevent and find out the unauthorized access within a network. This paper presents an extensive survey of several forensic frameworks. There is a demand of a system which not only detects the complex attack, but also it should be able to understand what had happened. Here it talks about the concept of the distributed network forensics. The concept of the Distributed network forensics is based on the distributed techniques, which are useful for providing an integrated platform for the automatic forensic evidence gathering and important data storage, valuable support and an attack attribution graph generation mechanism to depict hacking events.",
"title": ""
},
{
"docid": "fd0e31b2675a797c26af731ef1ff22df",
"text": "State representations critically affect the effectiveness of learning in robots. In this paper, we propose a roboticsspecific approach to learning such state representations. Robots accomplish tasks by interacting with the physical world. Physics in turn imposes structure on both the changes in the world and on the way robots can effect these changes. Using prior knowledge about interacting with the physical world, robots can learn state representations that are consistent with physics. We identify five robotic priors and explain how they can be used for representation learning. We demonstrate the effectiveness of this approach in a simulated slot car racing task and a simulated navigation task with distracting moving objects. We show that our method extracts task-relevant state representations from highdimensional observations, even in the presence of task-irrelevant distractions. We also show that the state representations learned by our method greatly improve generalization in reinforcement learning.",
"title": ""
},
{
"docid": "98b4e2d51efde6f4f8c43c29650b8d2f",
"text": "New robotics is an approach to robotics that, in contrast to traditional robotics, employs ideas and principles from biology. While in the traditional approach there are generally accepted methods (e.g., from control theory), designing agents in the new robotics approach is still largely considered an art. In recent years, we have been developing a set of heuristics, or design principles, that on the one hand capture theoretical insights about intelligent (adaptive) behavior, and on the other provide guidance in actually designing and building systems. In this article we provide an overview of all the principles but focus on the principles of ecological balance, which concerns the relation between environment, morphology, materials, and control, and sensory-motor coordination, which concerns self-generated sensory stimulation as the agent interacts with the environment and which is a key to the development of high-level intelligence. As we argue, artificial evolution together with morphogenesis is not only nice to have but is in fact a necessary tool for designing embodied agents.",
"title": ""
},
{
"docid": "209203c297898a2251cfd62bdfc37296",
"text": "Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computerbased problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.",
"title": ""
},
{
"docid": "7735668d4f8407d9514211d9f5492ce6",
"text": "This revision to the EEG Guidelines is an update incorporating current EEG technology and practice. The role of the EEG in making the determination of brain death is discussed as are suggested technical criteria for making the diagnosis of electrocerebral inactivity.",
"title": ""
},
{
"docid": "e83227e0485cf7f3ba19ce20931bbc2f",
"text": "There has been an increased global demand for dermal filler injections in recent years. Although hyaluronic acid-based dermal fillers generally have a good safety profile, serious vascular complications have been reported. Here we present a typical case of skin necrosis following a nonsurgical rhinoplasty using hyaluronic acid filler. Despite various rescuing managements, unsightly superficial scars were left. It is critical for plastic surgeons and dermatologists to be familiar with the vascular anatomy and the staging of vascular complications. Any patients suspected to experience a vascular complication should receive early management under close monitoring. Meanwhile, the potentially devastating outcome caused by illegal practice calls for stricter regulations and law enforcement.",
"title": ""
},
{
"docid": "d559ace14dcc42f96d0a96b959a92643",
"text": "Graphs are an integral data structure for many parts of computation. They are highly effective at modeling many varied and flexible domains, and are excellent for representing the way humans themselves conceive of the world. Nowadays, there is lots of interest in working with large graphs, including social network graphs, “knowledge” graphs, and large bipartite graphs (for example, the Netflix movie matching graph).",
"title": ""
},
{
"docid": "f8093849e9157475149d00782c60ae60",
"text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.",
"title": ""
},
{
"docid": "9766e0507346e46e24790a4873979aa4",
"text": "Extreme learning machine (ELM) is proposed for solving a single-layer feed-forward network (SLFN) with fast learning speed and has been confirmed to be effective and efficient for pattern classification and regression in different fields. ELM originally focuses on the supervised, semi-supervised, and unsupervised learning problems, but just in the single domain. To our best knowledge, ELM with cross-domain learning capability in subspace learning has not been exploited very well. Inspired by a cognitive-based extreme learning machine technique (Cognit Comput. 6:376–390, 1; Cognit Comput. 7:263–278, 2.), this paper proposes a unified subspace transfer framework called cross-domain extreme learning machine (CdELM), which aims at learning a common (shared) subspace across domains. Three merits of the proposed CdELM are included: (1) A cross-domain subspace shared by source and target domains is achieved based on domain adaptation; (2) ELM is well exploited in the cross-domain shared subspace learning framework, and a new perspective is brought for ELM theory in heterogeneous data analysis; (3) the proposed method is a subspace learning framework and can be combined with different classifiers in recognition phase, such as ELM, SVM, nearest neighbor, etc. Experiments on our electronic nose olfaction datasets demonstrate that the proposed CdELM method significantly outperforms other compared methods.",
"title": ""
},
{
"docid": "9faf67646394dfedfef1b6e9152d9cf6",
"text": "Acoustic shooter localization systems are being rapidly deployed in the field. However, these are standalone systems---either wearable or vehicle-mounted---that do not have networking capability even though the advantages of widely distributed sensing for locating shooters have been demonstrated before. The reason for this is that certain disadvantages of wireless network-based prototypes made them impractical for the military. The system that utilized stationary single-channel sensors required many sensor nodes, while the multi-channel wearable version needed to track the absolute self-orientation of the nodes continuously, a notoriously hard task. This paper presents an approach that overcomes the shortcomings of past approaches. Specifically, the technique requires as few as five single-channel wireless sensors to provide accurate shooter localization and projectile trajectory estimation. Caliber estimation and weapon classification are also supported. In addition, a single node alone can provide reliable miss distance and range estimates based on a single shot as long as a reasonable assumption holds. The main contribution of the work and the focus of this paper is the novel sensor fusion technique that works well with a limited number of observations. The technique is thoroughly evaluated using an extensive shot library.",
"title": ""
},
{
"docid": "1b0cb70fb25d86443a01a313371a27ae",
"text": "We present a protocol for general state machine replication – a method that provides strong consistency – that has high performance in a wide-area network. In particular, our protocol Mencius has high throughput under high client load and low latency under low client load even under changing wide-area network environment and client load. We develop our protocol as a derivation from the well-known protocol Paxos. Such a development can be changed or further refined to take advantage of specific network or application requirements.",
"title": ""
},
{
"docid": "b36549a4b16c2c8ab50f1adda99f3120",
"text": "Spatial representations of time are a ubiquitous feature of human cognition. Nevertheless, interesting sociolinguistic variations exist with respect to where in space people locate temporal constructs. For instance, while in English time metaphorically flows horizontally, in Mandarin an additional vertical dimension is employed. Noting that the bilingual mind can flexibly accommodate multiple representations, the present work explored whether Mandarin-English bilinguals possess two mental time lines. Across two experiments, we demonstrated that Mandarin-English bilinguals do indeed employ both horizontal and vertical representations of time. Importantly, subtle variations to cultural context were seen to shape how these time lines were deployed.",
"title": ""
},
{
"docid": "41611606af8671f870fb90e50c2e99fc",
"text": "Pointwise label and pairwise label are both widely used in computer vision tasks. For example, supervised image classification and annotation approaches use pointwise label, while attribute-based image relative learning often adopts pairwise labels. These two types of labels are often considered independently and most existing efforts utilize them separately. However, pointwise labels in image classification and tag annotation are inherently related to the pairwise labels. For example, an image labeled with \"coast\" and annotated with \"beach, sea, sand, sky\" is more likely to have a higher ranking score in terms of the attribute \"open\", while \"men shoes\" ranked highly on the attribute \"formal\" are likely to be annotated with \"leather, lace up\" than \"buckle, fabric\". The existence of potential relations between pointwise labels and pairwise labels motivates us to fuse them together for jointly addressing related vision tasks. In particular, we provide a principled way to capture the relations between class labels, tags and attributes, and propose a novel framework PPP(Pointwise and Pairwise image label Prediction), which is based on overlapped group structure extracted from the pointwise-pairwise-label bipartite graph. With experiments on benchmark datasets, we demonstrate that the proposed framework achieves superior performance on three vision tasks compared to the state-of-the-art methods.",
"title": ""
},
{
"docid": "dc93d2204ff27c7d55a71e75d2ae4ca9",
"text": "Locating and securing an Alzheimer's patient who is outdoors and in wandering state is crucial to patient's safety. Although advances in geotracking and mobile technology have made locating patients instantly possible, reaching them while in wandering state may take time. However, a social network of caregivers may help shorten the time that it takes to reach and secure a wandering AD patient. This study proposes a new type of intervention based on novel mobile application architecture to form and direct a social support network of caregivers for locating and securing wandering patients as soon as possible. System employs, aside from the conventional tracking mechanism, a wandering detection mechanism, both of which operates through a tracking device installed a Subscriber Identity Module for Global System for Mobile Communications Network(GSM). System components are being implemented using Java. Family caregivers will be interviewed prior to and after the use of the system and Center For Epidemiologic Studies Depression Scale, Patient Health Questionnaire and Zarit Burden Interview will be applied to them during these interviews to find out the impact of the system in terms of depression, anxiety and burden, respectively.",
"title": ""
},
{
"docid": "acb3689c9ece9502897cebb374811f54",
"text": "In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.",
"title": ""
}
] |
scidocsrr
|
d0a0791c9c6f6d9ffdc2f4ebb05a8241
|
Big Data Analysis in Smart Manufacturing: A Review
|
[
{
"docid": "c12fb39060ec4dd2c7bb447352ea4e8a",
"text": "Lots of data from different domains is published as Linked Open Data (LOD). While there are quite a few browsers for such data, as well as intelligent tools for particular purposes, a versatile tool for deriving additional knowledge by mining the Web of Linked Data is still missing. In this system paper, we introduce the RapidMiner Linked Open Data extension. The extension hooks into the powerful data mining and analysis platform RapidMiner, and offers operators for accessing Linked Open Data in RapidMiner, allowing for using it in sophisticated data analysis workflows without the need for expert knowledge in SPARQL or RDF. The extension allows for autonomously exploring the Web of Data by following links, thereby discovering relevant datasets on the fly, as well as for integrating overlapping data found in different datasets. As an example, we show how statistical data from the World Bank on scientific publications, published as an RDF data cube, can be automatically linked to further datasets and analyzed using additional background knowledge from ten different LOD datasets.",
"title": ""
},
{
"docid": "150e7a6f46e93fc917e43e32dedd9424",
"text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.",
"title": ""
}
] |
[
{
"docid": "9f21792dbe89fa95d85e7210cf1de9c6",
"text": "Convolutional Neural Networks have provided state-of-the-art results in several computer vision problems. However, due to a large number of parameters in CNNs, they require a large number of training samples which is a limiting factor for small sample size problems. To address this limitation, we propose SSF-CNN which focuses on learning the \"structure\" and \"strength\" of filters. The structure of the filter is initialized using a dictionary based filter learning algorithm and the strength of the filter is learned using the small sample training data. The architecture provides the flexibility of training with both small and large training databases, and yields good accuracies even with small size training data. The effectiveness of the algorithm is first demonstrated on MNIST, CIFAR10, and NORB databases, with varying number of training samples. The results show that SSF-CNN significantly reduces the number of parameters required for training while providing high accuracies on the test databases. On small sample size problems such as newborn face recognition and Omniglot, it yields state-of-the-art results. Specifically, on the IIITD Newborn Face Database, the results demonstrate improvement in rank-1 identification accuracy by at least 10%.",
"title": ""
},
{
"docid": "f6ba46b72139f61cfb098656d71553ed",
"text": "This paper introduces the Voice Conversion Octave Toolbox made available to the public as open source. The first version of the toolbox features tools for VTLN-based voice conversion supporting a variety of warping functions. The authors describe the implemented functionality and how to configure the included tools.",
"title": ""
},
{
"docid": "792df318ee62c4e5409f53829c3de05c",
"text": "In this paper we present a novel technique to calibrate multiple casually aligned projectors on a fiducial-free cylindrical curved surface using a single camera. We impose two priors to the cylindrical display: (a) cylinder is a vertically extruded surface; and (b) the aspect ratio of the rectangle formed by the four corners of the screen is known. Using these priors, we can estimate the display's 3D surface geometry and camera extrinsic parameters using a single image without any explicit display to camera correspondences. Using the estimated camera and display properties, we design a novel deterministic algorithm to recover the intrinsic and extrinsic parameters of each projector using a single projected pattern seen by the camera which is then used to register the images on the display from any arbitrary viewpoint making it appropriate for virtual reality systems. Finally, our method can be extended easily to handle sharp corners — making it suitable for the common CAVE like VR setup. To the best of our knowledge, this is the first method that can achieve accurate geometric auto-calibration of multiple projectors on a cylindrical display without performing an extensive stereo reconstruction.",
"title": ""
},
{
"docid": "c663806c6b086b31e57a9d7e54a46d4b",
"text": "Deep neural networks are frequently used for computer vision, speech recognition and text processing. The reason is their ability to regress highly nonlinear functions. We present an end-to-end controller for steering autonomous vehicles based on a convolutional neural network (CNN). The deployed framework does not require explicit hand-engineered algorithms for lane detection, object detection or path planning. The trained neural net directly maps pixel data from a front-facing camera to steering commands and does not require any other sensors. We compare the controller performance with the steering behavior of a human driver.",
"title": ""
},
{
"docid": "53a1d344a6e38dd790e58c6952e51cdb",
"text": "The thermal conductivities of individual single crystalline intrinsic Si nanowires with diameters of 22, 37, 56, and 115 nm were measured using a microfabricated suspended device over a temperature range of 20–320 K. Although the nanowires had well-defined crystalline order, the thermal conductivity observed was more than two orders of magnitude lower than the bulk value. The strong diameter dependence of thermal conductivity in nanowires was ascribed to the increased phonon-boundary scattering and possible phonon spectrum modification. © 2003 American Institute of Physics.@DOI: 10.1063/1.1616981 #",
"title": ""
},
{
"docid": "f1744cf87ee2321c5132d6ee30377413",
"text": "How do movements in the distribution of income and wealth affect the macroeconomy? We analyze this question using a calibrated version of the stochastic growth model with partially uninsurable idiosyncratic risk and movements in aggregate productivity. Our main finding is that, in the stationary stochastic equilibrium, the behavior of the macroeconomic aggregates can be almost perfectly described using only the mean of the wealth distribution. This result is robust to substantial changes in both parameter values and model specification. Our benchmark model, whose only difference from the representative-agent framework is the existence of uninsurable idiosyncratic risk, displays far less cross-sectional dispersion",
"title": ""
},
{
"docid": "94c9eec9aa4f36bf6b2d83c3cc8dbb12",
"text": "Many real world security problems can be modelled as finite zero-sum games with structured sequential strategies and limited interactions between the players. An abstract class of games unifying these models are the normal-form games with sequential strategies (NFGSS). We show that all games from this class can be modelled as well-formed imperfect-recall extensiveform games and consequently can be solved by counterfactual regret minimization. We propose an adaptation of the CFR algorithm for NFGSS and compare its performance to the standard methods based on linear programming and incremental game generation. We validate our approach on two security-inspired domains. We show that with a negligible loss in precision, CFR can compute a Nash equilibrium with five times less computation than its competitors. Game theory has been recently used to model many real world security problems, such as protecting airports (Pita et al. 2008) or airplanes (Tsai et al. 2009) from terrorist attacks, preventing fare evaders form misusing public transport (Yin et al. 2012), preventing attacks in computer networks (Durkota et al. 2015), or protecting wildlife from poachers (Fang, Stone, and Tambe 2015). Many of these security problems are sequential in nature. Rather than a single monolithic action, the players’ strategies are formed by sequences of smaller individual decisions. For example, the ticket inspectors make a sequence of decisions about where to check tickets and which train to take; a network administrator protects the network against a sequence of actions an attacker uses to penetrate deeper into the network. Sequential decision making in games has been extensively studied from various perspectives. Recent years have brought significant progress in solving massive imperfectinformation extensive-form games with a focus on the game of poker. Counterfactual regret minimization (Zinkevich et al. 2008) is the family of algorithms that has facilitated much of this progress, with a recent incarnation (Tammelin et al. 2015) essentially solving for the first time a variant of poker commonly played by people (Bowling et al. 2015). However, there has not been any transfer of these results to research on real world security problems. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. We focus on an abstract class of sequential games that can model many sequential security games, such as games taking place in physical space that can be discretized as a graph. This class of games is called normal-form games with sequential strategies (NFGSS) (Bosansky et al. 2015) and it includes, for example, existing game theoretic models of ticket inspection (Jiang et al. 2013), border patrolling (Bosansky et al. 2015), and securing road networks (Jain et al. 2011). In this work we formally prove that any NFGSS can be modelled as a slightly generalized chance-relaxed skew well-formed imperfect-recall game (CRSWF) (Lanctot et al. 2012; Kroer and Sandholm 2014), a subclass of extensiveform games with imperfect recall in which counterfactual regret minimization is guaranteed to converge to the optimal strategy. We then show how to adapt the recent variant of the algorithm, CFR, directly to NFGSS and present experimental validation on two distinct domains modelling search games and ticket inspection. We show that CFR is applicable and efficient in domains with imperfect recall that are substantially different from poker. 
Moreover, if we are willing to sacrifice a negligible degree of approximation, CFR can find a solution substantially faster than methods traditionally used in research on security games, such as formulating the game as a linear program (LP) and incrementally building the game model by double oracle methods.",
"title": ""
},
{
"docid": "800dc3e6a3f58d2af1ed7cd526074d54",
"text": "The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting. To resolve this issue, we propose a sparsity regularization method that exploits both positive and negative correlations among the features to enforce the network to be sparse, and at the same time remove any redundancies among the features to fully utilize the capacity of the network. Specifically, we propose to use an exclusive sparsity regularization based on (1, 2)-norm, which promotes competition for features between different weights, thus enforcing them to fit to disjoint sets of features. We further combine the exclusive sparsity with the group sparsity based on (2, 1)-norm, to promote both sharing and competition for features in training of a deep neural network. We validate our method on multiple public datasets, and the results show that our method can obtain more compact and efficient networks while also improving the performance over the base networks with full weights, as opposed to existing sparsity regularizations that often obtain efficiency at the expense of prediction accuracy.",
"title": ""
},
{
"docid": "b61042f2d5797e57e2bc395966bb7ad2",
"text": "A number of classifier fusion methods have been recently developed opening an alternative approach leading to a potential improvement in the classification performance. As there is little theory of information fusion itself, currently we are faced with different methods designed for different problems and producing different results. This paper gives an overview of classifier fusion methods and attempts to identify new trends that may dominate this area of research in future. A taxonomy of fusion methods trying to bring some order into the existing “pudding of diversities” is also provided.",
"title": ""
},
{
"docid": "83ba1d7915fc7cb73c86172970b1979e",
"text": "This paper presents a new modeling methodology accounting for generation and propagation of minority carriers that can be used directly in circuit-level simulators in order to estimate coupled parasitic currents. The method is based on a new compact model of basic components (p-n junction and resistance) and takes into account minority carriers at the boundary. An equivalent circuit schematic of the substrate is built by identifying these basic elements in the substrate and interconnecting them. Parasitic effects such as bipolar or latch-up effects result from the continuity of minority carriers guaranteed by the components' models. A structure similar to a half-bridge perturbing sensitive n-wells has been simulated. It is composed by four p-n junctions connected together by their common p-doped sides. The results are in good agreement with those obtained from physical device simulations.",
"title": ""
},
{
"docid": "7543281174d7dc63e180249d94ad6c07",
"text": "Enriching speech recognition output with sentence boundaries improves its human readability and enables further processing by downstream language processing modules. We have constructed a hidden Markov model (HMM) system to detect sentence boundaries that uses both prosodic and textual information. Since there are more nonsentence boundaries than sentence boundaries in the data, the prosody model, which is implemented as a decision tree classifier, must be constructed to effectively learn from the imbalanced data distribution. To address this problem, we investigate a variety of sampling approaches and a bagging scheme. A pilot study was carried out to select methods to apply to the full NIST sentence boundary evaluation task across two corpora (conversational telephone speech and broadcast news speech), using both human transcriptions and recognition output. In the pilot study, when classification error rate is the performance measure, using the original training set achieves the best performance among the sampling methods, and an ensemble of multiple classifiers from different downsampled training sets achieves slightly poorer performance, but has the potential to reduce computational effort. However, when performance is measured using receiver operating characteristics (ROC) or area under the curve (AUC), then the sampling approaches outperform the original training set. This observation is important if the 0885-2308/$ see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.csl.2005.06.002 * Corresponding author. Tel.: +1 510 666 2993; fax: +510 666 2956. E-mail addresses: [email protected] (Y. Liu), [email protected] (N.V. Chawla), [email protected] (M.P. Harper), [email protected] (E. Shriberg), [email protected] (A. Stolcke). Y. Liu et al. / Computer Speech and Language 20 (2006) 468–494 469 sentence boundary detection output is used by downstream language processing modules. Bagging was found to significantly improve system performance for each of the sampling methods. The gain from these methods may be diminished when the prosody model is combined with the language model, which is a strong knowledge source for the sentence detection task. The patterns found in the pilot study were replicated in the full NIST evaluation task. The conclusions may be dependent on the task, the classifiers, and the knowledge combination approach. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e58e294dbacf605e40ff2f59cc4f8a6a",
"text": "There are fundamental similarities between sleep in mammals and quiescence in the arthropod Drosophila melanogaster, suggesting that sleep-like states are evolutionarily ancient. The nematode Caenorhabditis elegans also has a quiescent behavioural state during a period called lethargus, which occurs before each of the four moults. Like sleep, lethargus maintains a constant temporal relationship with the expression of the C. elegans Period homologue LIN-42 (ref. 5). Here we show that quiescence associated with lethargus has the additional sleep-like properties of reversibility, reduced responsiveness and homeostasis. We identify the cGMP-dependent protein kinase (PKG) gene egl-4 as a regulator of sleep-like behaviour, and show that egl-4 functions in sensory neurons to promote the C. elegans sleep-like state. Conserved effects on sleep-like behaviour of homologous genes in C. elegans and Drosophila suggest a common genetic regulation of sleep-like states in arthropods and nematodes. Our results indicate that C. elegans is a suitable model system for the study of sleep regulation. The association of this C. elegans sleep-like state with developmental changes that occur with larval moults suggests that sleep may have evolved to allow for developmental changes.",
"title": ""
},
{
"docid": "01b05ea8fcca216e64905da7b5508dea",
"text": "Generative Adversarial Networks (GANs) have recently emerged as powerful generative models. GANs are trained by an adversarial process between a generative network and a discriminative network. It is theoretically guaranteed that, in the nonparametric regime, by arriving at the unique saddle point of a minimax objective function, the generative network generates samples from the data distribution. However, in practice, getting close to this saddle point has proven to be difficult, resulting in the ubiquitous problem of “mode collapse”. The root of the problems in training GANs lies on the unbalanced nature of the game being played. Here, we propose to level the playing field and make the minimax game balanced by “heating” the data distribution. The empirical distribution is frozen at temperature zero; GANs are instead initialized at infinite temperature, where learning is stable. By annealing the heated data distribution, we initialized the network at each temperature with the learnt parameters of the previous higher temperature. We posited a conjecture that learning under continuous annealing in the nonparametric regime is stable, and proposed an algorithm in corollary. In our experiments, the annealed GAN algorithm, dubbed β-GAN, trained with unmodified objective function was stable and did not suffer from mode collapse.",
"title": ""
},
{
"docid": "852ff3b52b4bf8509025cb5cb751899f",
"text": "Digital images are ubiquitous in our modern lives, with uses ranging from social media to news, and even scientific papers. For this reason, it is crucial evaluate how accurate people are when performing the task of identify doctored images. In this paper, we performed an extensive user study evaluating subjects capacity to detect fake images. After observing an image, users have been asked if it had been altered or not. If the user answered the image has been altered, he had to provide evidence in the form of a click on the image. We collected 17,208 individual answers from 383 users, using 177 images selected from public forensic databases. Different from other previously studies, our method propose different ways to avoid lucky guess when evaluating users answers. Our results indicate that people show inaccurate skills at differentiating between altered and non-altered images, with an accuracy of 58%, and only identifying the modified images 46.5% of the time. We also track user features such as age, answering time, confidence, providing deep analysis of how such variables influence on the users’ performance.",
"title": ""
},
{
"docid": "c3be24db41e57658793281a9765635c0",
"text": "A boundary element method (BEM) simulation is used to compare the efficiency of numerical inverse Laplace transform strategies, considering general requirements of Laplace-space numerical approaches. The two-dimensional BEM solution is used to solve the Laplace-transformed diffusion equation, producing a time-domain solution after a numerical Laplace transform inversion. Motivated by the needs of numerical methods posed in Laplace-transformed space, we compare five inverse Laplace transform algorithms and discuss implementation techniques to minimize the number of Laplace-space function evaluations. We investigate the ability to calculate a sequence of time domain values using the fewest Laplace-space model evaluations. We find Fourier-series based inversion algorithms work for common time behaviors, are the most robust with respect to free parameters, and allow for straightforward image function evaluation re-use across at least a log cycle of time.",
"title": ""
},
{
"docid": "5594475c91355d113e0045043eff8b93",
"text": "Background: Since the introduction of the systematic review process to Software Engineering in 2004, researchers have investigated a number of ways to mitigate the amount of effort and time taken to filter through large volumes of literature.\n Aim: This study aims to provide a critical analysis of text mining techniques used to support the citation screening stage of the systematic review process.\n Method: We critically re-reviewed papers included in a previous systematic review which addressed the use of text mining methods to support the screening of papers for inclusion in a review. The previous review did not provide a detailed analysis of the text mining methods used. We focus on the availability in the papers of information about the text mining methods employed, including the description and explanation of the methods, parameter settings, assessment of the appropriateness of their application given the size and dimensionality of the data used, performance on training, testing and validation data sets, and further information that may support the reproducibility of the included studies.\n Results: Support Vector Machines (SVM), Naïve Bayes (NB) and Committee of classifiers (Ensemble) are the most used classification algorithms. In all of the studies, features were represented with Bag-of-Words (BOW) using both binary features (28%) and term frequency (66%). Five studies experimented with n-grams with n between 2 and 4, but mostly the unigram was used. χ2, information gain and tf-idf were the most commonly used feature selection techniques. Feature extraction was rarely used although LDA and topic modelling were used. Recall, precision, F and AUC were the most used metrics and cross validation was also well used. More than half of the studies used a corpus size of below 1,000 documents for their experiments while corpus size for around 80% of the studies was 3,000 or fewer documents. The major common ground we found for comparing performance assessment based on independent replication of studies was the use of the same dataset but a sound performance comparison could not be established because the studies had little else in common. In most of the studies, insufficient information was reported to enable independent replication. The studies analysed generally did not include any discussion of the statistical appropriateness of the text mining method that they applied. In the case of applications of SVM, none of the studies report the number of support vectors that they found to indicate the complexity of the prediction engine that they use, making it impossible to judge the extent to which over-fitting might account for the good performance results.\n Conclusions: There is yet to be concrete evidence about the effectiveness of text mining algorithms regarding their use in the automation of citation screening in systematic reviews. The studies indicate that options are still being explored, but there is a need for better reporting as well as more explicit process details and access to datasets to facilitate study replication for evidence strengthening. In general, the reader often gets the impression that text mining algorithms were applied as magic tools in the reviewed papers, relying on default settings or default optimization of available machine learning toolboxes without an in-depth understanding of the statistical validity and appropriateness of such tools for text mining purposes.",
"title": ""
},
{
"docid": "a2dfa8007b3a13da31a768fe07393d15",
"text": "Predicting the time and effort for a software problem has long been a difficult task. We present an approach that automatically predicts the fixing effort, i.e., the person-hours spent on fixing an issue. Our technique leverages existing issue tracking systems: given a new issue report, we use the Lucene framework to search for similar, earlier reports and use their average time as a prediction. Our approach thus allows for early effort estimation, helping in assigning issues and scheduling stable releases. We evaluated our approach using effort data from the JBoss project. Given a sufficient number of issues reports, our automatic predictions are close to the actual effort; for issues that are bugs, we are off by only one hour, beating na¨ýve predictions by a factor of four.",
"title": ""
},
{
"docid": "08025e6ed1ee71596bdc087bfd646eac",
"text": "A method is presented for computing an orthonormal set of eigenvectors for the discrete Fourier transform (DFT). The technique is based on a detailed analysis of the eigenstructure of a special matrix which commutes with the DFT. It is also shown how fractional powers of the DFT can be efficiently computed, and possible applications to multiplexing and transform coding are suggested. T",
"title": ""
},
{
"docid": "3cfa45816c57cbbe1d86f7cce7f52967",
"text": "Video games have become one of the favorite activities of American children. A growing body of research is linking violent video game play to aggressive cognitions, attitudes, and behaviors. The first goal of this study was to document the video games habits of adolescents and the level of parental monitoring of adolescent video game use. The second goal was to examine associations among violent video game exposure, hostility, arguments with teachers, school grades, and physical fights. In addition, path analyses were conducted to test mediational pathways from video game habits to outcomes. Six hundred and seven 8th- and 9th-grade students from four schools participated. Adolescents who expose themselves to greater amounts of video game violence were more hostile, reported getting into arguments with teachers more frequently, were more likely to be involved in physical fights, and performed more poorly in school. Mediational pathways were found such that hostility mediated the relationship between violent video game exposure and outcomes. Results are interpreted within and support the framework of the General Aggression Model.",
"title": ""
},
{
"docid": "2c8bfb9be08edfdac6d335bdcffe204c",
"text": "Undoubtedly, the age of big data has opened new options for natural disaster management, primarily because of the varied possibilities it provides in visualizing, analyzing, and predicting natural disasters. From this perspective, big data has radically changed the ways through which human societies adopt natural disaster management strategies to reduce human suffering and economic losses. In a world that is now heavily dependent on information technology, the prime objective of computer experts and policy makers is to make the best of big data by sourcing information from varied formats and storing it in ways that it can be effectively used during different stages of natural disaster management. This paper aimed at making a systematic review of the literature in analyzing the role of big data in natural disaster management and highlighting the present status of the technology in providing meaningful and effective solutions in natural disaster management. The paper has presented the findings of several researchers on varied scientific and technological perspectives that have a bearing on the efficacy of big data in facilitating natural disaster management. In this context, this paper reviews the major big data sources, the associated achievements in different disaster management phases, and emerging technological topics associated with leveraging this new ecosystem of Big Data to monitor and detect natural hazards, mitigate their effects, assist in relief efforts, and contribute to the recovery and reconstruction processes.",
"title": ""
}
] |
scidocsrr
|
3031061c001d65189e472e38d315958e
|
UBIRIS: A Noisy Iris Image Database
|
[
{
"docid": "d82e41bcf0d25a728ddbad1dd875bd16",
"text": "With an increasing emphasis on security, automated personal identification based on biometrics has been receiving extensive attention over the past decade. Iris recognition, as an emerging biometric recognition approach, is becoming a very active topic in both research and practical applications. In general, a typical iris recognition system includes iris imaging, iris liveness detection, and recognition. This paper focuses on the last issue and describes a new scheme for iris recognition from an image sequence. We first assess the quality of each image in the input sequence and select a clear iris image from such a sequence for subsequent recognition. A bank of spatial filters, whose kernels are suitable for iris recognition, is then used to capture local characteristics of the iris so as to produce discriminating texture features. Experimental results show that the proposed method has an encouraging performance. In particular, a comparative study of existing methods for iris recognition is conducted on an iris image database including 2,255 sequences from 213 subjects. Conclusions based on such a comparison using a nonparametric statistical method (the bootstrap) provide useful information for further research.",
"title": ""
},
{
"docid": "e02a7947c8ffb6fc6abeb2854ef2afd7",
"text": "This paper examines automated iris recognition as a biometri· ca/ly based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular, the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remate examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.",
"title": ""
}
] |
[
{
"docid": "dbf694e11b78835dbc31ef4249bfff73",
"text": "Insider attacks are a well-known problem acknowledged as a threat as early as 1980s. The threat is attributed to legitimate users who abuse their privileges, and given their familiarity and proximity to the computational environment, can easily cause significant damage or losses. Due to the lack of tools and techniques, security analysts do not correctly perceive the threat, and hence consider the attacks as unpreventable. In this paper, we present a theory of insider threat assessment. First, we describe a modeling methodology which captures several aspects of insider threat, and subsequently, show threat assessment methodologies to reveal possible attack strategies of an insider.",
"title": ""
},
{
"docid": "684a6453972cb2a054b82af4fd1d1713",
"text": "Adoptive cell therapy (ACT) is a highly personalized cancer therapy that involves administration to the cancer-bearing host of immune cells with direct anticancer activity. ACT using naturally occurring tumor-reactive lymphocytes has mediated durable, complete regressions in patients with melanoma, probably by targeting somatic mutations exclusive to each cancer. These results have expanded the reach of ACT to the treatment of common epithelial cancers. In addition, the ability to genetically engineer lymphocytes to express conventional T cell receptors or chimeric antigen receptors has further extended the successful application of ACT for cancer treatment.",
"title": ""
},
{
"docid": "6af336fb0d0381b8fcb5f361b702de11",
"text": "We highlight an important frontier in algorithmic fairness: disparity in the quality of natural language processing algorithms when applied to language from authors of dierent social groups. For example, current systems sometimes analyze the language of females and minorities more poorly than they do of whites and males. We conduct an empirical analysis of racial disparity in language identication for tweets wrien in African-American English, and discuss implications of disparity in NLP.",
"title": ""
},
{
"docid": "b656dbd52405a867b1a8a2914e4cd494",
"text": "Clustering is one of the most important techniques of data mining. Clustering technique in data mining is an unsupervised machine learning algorithm that finds the groups of object such that objects in one group will be similar to one another and are dissimilar to the objects belonging to other clusters. Clustering is called unsupervised machine learning algorithm as groups are not predefined but defined by the data. So the most similar data are grouped into the clusters. In this paper, we compare five clustering algorithm namely Farthest first, MakeDensityBasedClusterer, Simple K-means, EM, Hierarchical clustering algorithm for recommending the course to the student based on student course selection & present the result. According to our simulation, we find that Simple K-means works better than other algorithms.",
"title": ""
},
{
"docid": "b23062a79449ff202c913b6a8e0b967b",
"text": "Analytics are at the core of many business intelligence tasks. Efficient query execution is facilitated by advanced hardware features, such as multi-core parallelism, shared-nothing low-latency caches, and SIMD vector instructions. Only recently, the SIMD capabilities of mainstream hardware have been augmented with wider vectors and non-contiguous loads termed gathers. While analytical DBMSs minimize the use of indexes in favor of scans based on sequential memory accesses, some data structures remain crucial. The Bloom filter, one such example, is the most efficient structure for filtering tuples based on their existence in a set and its performance is critical when joining tables with vastly different cardinalities. We introduce a vectorized implementation for probing Bloom filters based on gathers that eliminates conditional control flow and is independent of the SIMD length. Our techniques are generic and can be reused for accelerating other database operations. Our evaluation indicates a significant performance improvement over scalar code that can exceed 3X when the Bloom filter is cache-resident.",
"title": ""
},
{
"docid": "0e68120ea21beb2fdaff6538aa342aa5",
"text": "The development of a truly non-invasive continuous glucose sensor is an elusive goal. We describe the rise and fall of the Pendra device. In 2000 the company Pendragon Medical introduced a truly non-invasive continuous glucose-monitoring device. This system was supposed to work through so-called impedance spectroscopy. Pendra was Conformité Européenne (CE) approved in May 2003. For a short time the Pendra was available on the Dutch direct-to-consumer market. A post-marketing reliability study was performed in six type 1 diabetes patients. Mean absolute difference between Pendra glucose values and values obtained through self-monitoring of blood glucose was 52%; the Pearson’s correlation coefficient was 35.1%; and a Clarke error grid showed 4.3% of the Pendra readings in the potentially dangerous zone E. We argue that the CE certification process for continuous glucose sensors should be made more transparent, and that a consensus on specific requirements for continuous glucose sensors is needed to prevent patient exposure to potentially dangerous situations.",
"title": ""
},
{
"docid": "8030903c8f1402044bc5bce9daa1644d",
"text": "We propose a generalization of exTNFS algorithm recently introduced by Kim and Barbulescu (CRYPTO 2016). The algorithm, exTNFS, is a state-of-the-art algorithm for discrete logarithm in Fpn in the medium prime case, but it only applies when n = ηκ is a composite with nontrivial factors η and κ such that gcd(η, κ) = 1. Our generalization, however, shows that exTNFS algorithm can be also adapted to the setting with an arbitrary composite n maintaining its best asymptotic complexity. We show that one can solve discrete logarithm in medium case in the running time of Lpn(1/3, 3 √ 48/9) (resp. Lpn(1/3, 1.71) if multiple number fields are used), where n is an arbitrary composite. This should be compared with a recent variant by Sarkar and Singh (Asiacrypt 2016) that has the fastest running time of Lpn(1/3, 3 √ 64/9) (resp. Lpn(1/3, 1.88)) when n is a power of prime 2. When p is of special form, the complexity is further reduced to Lpn(1/3, 3 √ 32/9). On the practical side, we emphasize that the keysize of pairing-based cryptosystems should be updated following to our algorithm if the embedding degree n remains composite.",
"title": ""
},
{
"docid": "c796a0c9fd09f795a32f2ef09b1c0405",
"text": "Vectors of data are at the heart of machine learning and data mining. Recently, vector quantization methods have shown great promise in reducing both the time and space costs of operating on vectors. We introduce a vector quantization algorithm that can compress vectors over 12x faster than existing techniques while also accelerating approximate vector operations such as distance and dot product computations by up to 10x. Because it can encode over 2GB of vectors per second, it makes vector quantization cheap enough to employ in many more circumstances. For example, using our technique to compute approximate dot products in a nested loop can multiply matrices faster than a state-of-the-art BLAS implementation, even when our algorithm must first compress the matrices. In addition to showing the above speedups, we demonstrate that our approach can accelerate nearest neighbor search and maximum inner product search by over 100x compared to floating point operations and 10x compared to other vector quantization methods. Our approximate Euclidean distance and dot product computations are not only faster than those of related algorithms with slower encodings, but also faster than Hamming distance computations, which have direct hardware support on the tested platforms. We also assess the errors of our algorithm's approximate distances and dot products, and find that it is competitive with existing, slower vector quantization algorithms.",
"title": ""
},
{
"docid": "3a53831731ec16edf54877c610ae4384",
"text": "We propose a position-based approach for largescale simulations of rigid bodies at interactive frame-rates. Our method solves positional constraints between rigid bodies and therefore integrates nicely with other position-based methods. Interaction of particles and rigid bodies through common constraints enables two-way coupling with deformables. The method exhibits exceptional performance and stability while being user-controllable and easy to implement. Various results demonstrate the practicability of our method for the resolution of collisions, contacts, stacking and joint constraints.",
"title": ""
},
{
"docid": "e06bcad453906dbe3f7ea370de85a431",
"text": "Traditionally in the social sciences, causal mediation analysis has been formulated, understood, and implemented within the framework of linear structural equation models. We argue and demonstrate that this is problematic for 3 reasons: the lack of a general definition of causal mediation effects independent of a particular statistical model, the inability to specify the key identification assumption, and the difficulty of extending the framework to nonlinear models. In this article, we propose an alternative approach that overcomes these limitations. Our approach is general because it offers the definition, identification, estimation, and sensitivity analysis of causal mediation effects without reference to any specific statistical model. Further, our approach explicitly links these 4 elements closely together within a single framework. As a result, the proposed framework can accommodate linear and nonlinear relationships, parametric and nonparametric models, continuous and discrete mediators, and various types of outcome variables. The general definition and identification result also allow us to develop sensitivity analysis in the context of commonly used models, which enables applied researchers to formally assess the robustness of their empirical conclusions to violations of the key assumption. We illustrate our approach by applying it to the Job Search Intervention Study. We also offer easy-to-use software that implements all our proposed methods.",
"title": ""
},
{
"docid": "a609c651c1828b026b00a25454194bc5",
"text": "Real-time scene understanding has become crucial in many applications such as autonomous driving. In this paper, we propose a deep architecture, called BlitzNet, that jointly performs object detection and semantic segmentation in one forward pass, allowing real-time computations. Besides the computational gain of having a single network to perform several tasks, we show that object detection and semantic segmentation benefit from each other in terms of accuracy. Experimental results for VOC and COCO datasets show state-of-the-art performance for object detection and segmentation among real time systems.",
"title": ""
},
{
"docid": "f3818583b2b010d2870e1e7c112d4936",
"text": "In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs.",
"title": ""
},
{
"docid": "89072077936c4d152cddb963e501b25c",
"text": "We present a compact model for source-to-drain tunneling current in sub-10-nm gate-all-around FinFET, where tunneling current becomes nonnegligible. Wentzel–Kramers–Brillouin method with a quadratic potential energy profile is used to analytically capture the dependence on biases in the tunneling probability expression and simplify the equation. The calculated tunneling probability increases with smaller effective mass and with increasing bias. We at first use the Gaussian quadrature method to integrate Landauer’s equation for tunneling current computation without further approximations. To boost simulation speed, some approximations are made. The simplified equation shows a good accuracy and has more flexibility for compact model purpose. The model is implemented into industry standard Berkeley Short-channel IGFET Model-common multi-gate model for future technology node, and is validated by the full-band atomistic quantum transport simulation data.",
"title": ""
},
{
"docid": "31ccfd3694ac87cf42f9ca9bc74cc0f4",
"text": "This paper presents a highly accurate and efficient method for crack detection using percolation-based image processing. The detection of cracks in concrete surfaces during the maintenance and diagnosis of concrete structures is important to ensure the safety of these structures. Recently, the image-based crack detection method has attracted considerable attention due to its low cost and objectivity. However, there are several problems in the practical application of image processing for crack detection since real concrete surface images have noises such as concrete blebs, stains, and shadings of several sizes. In order to resolve these problems, our proposed method focuses on the number of pixels in a crack and the connectivity of the pixels. Our method employs a percolation model for crack detection in order to consider the features of the cracks. Through experiments using real concrete surface images, we demonstrate the accuracy and efficiency of our method.",
"title": ""
},
{
"docid": "6773b060fd16b6630f581eb65c5c6488",
"text": "Proximity detection is one of the most common location-based applications in daily life when users intent to find their friends who get into their proximity. Studies on protecting user privacy information during the detection process have been widely concerned. In this paper, we first analyze a theoretical and experimental analysis of existing solutions for proximity detection, and then demonstrate that these solutions either provide a weak privacy preserving or result in a high communication and computational complexity. Accordingly, a location difference-based proximity detection protocol is proposed based on the Paillier cryptosystem for the purpose of dealing with the above shortcomings. The analysis results through an extensive simulation illustrate that our protocol outperforms traditional protocols in terms of communication and computation cost.",
"title": ""
},
{
"docid": "1dd8599c88a29ed0c4cfd0a502b50b71",
"text": "Providing customer support through social media channels is gaining increasing popularity. In such a context, automatic detection and analysis of the emotions expressed by customers is important, as is identification of the emotional techniques (e.g., apology, empathy, etc.) in the responses of customer service agents. Result of such an analysis can help assess the quality of such a service, help and inform agents about desirable responses, and help develop automated service agents for social media interactions. In this paper, we show that, in addition to text based turn features, dialogue features can significantly improve detection of emotions in social media customer service dialogues and help predict emotional techniques used by customer service agents.",
"title": ""
},
{
"docid": "8e3b73204d1d62337c4b2aabdbaa8973",
"text": "The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space. We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary. Through a systematic empirical investigation, we show that state-of-the-art deep nets learn connected classification regions, and that the decision boundary in the vicinity of datapoints is flat along most directions. We further draw an essential connection between two seemingly unrelated properties of deep networks: their sensitivity to additive perturbations in the inputs, and the curvature of their decision boundary. The directions where the decision boundary is curved in fact characterize the directions to which the classifier is the most vulnerable. We finally leverage a fundamental asymmetry in the curvature of the decision boundary of deep nets, and propose a method to discriminate between original images, and images perturbed with small adversarial examples. We show the effectiveness of this purely geometric approach for detecting small adversarial perturbations in images, and for recovering the labels of perturbed images.",
"title": ""
},
{
"docid": "ebf06033624b52607ed767019fdfd1c8",
"text": "In Taiwan elementary schools, Scratch programming has been taught for more than four years. Previous studies have shown that personal annotations is a useful learning method that improve learning performance. An annotation-based Scratch programming (ASP) system provides for the creation, share, and review of annotations and homework solutions in the interface of Scratch programming. In addition, we combine the ASP system with the problem solving-based teaching approach in Scratch programming pedagogy, which boosts cognition development and enhances learning achievements. This study is aimed at exploring the effects of annotations and homework on learning achievement. A quasi-experimental method was used with elementary school students in a Scratch programming course over a 4-month period. The experimental results revealed that students’ thoughts and solutions in solving homework assignments have a significant influence on learning achievement. We further investigated that only making annotations in solving homework activities, among all other variables (the quantity of annotations, the quantity of one’s own annotations reviewed, the quantity of peers’ annotations reviewed, the quantity of one’s own homework solutions reviewed, and the quantity of peers’ homework solutions reviewed), can significantly predict learning achievements.",
"title": ""
},
{
"docid": "ef2738cfced7ef069b13e5b5dca1558b",
"text": "Organic agriculture (OA) is practiced on 1% of the global agricultural land area and its importance continues to grow. Specifically, OA is perceived by many as having less Advances inAgronomy, ISSN 0065-2113 © 2016 Elsevier Inc. http://dx.doi.org/10.1016/bs.agron.2016.05.003 All rights reserved. 1 ARTICLE IN PRESS",
"title": ""
},
{
"docid": "31ab58f42f5f34f765d28aead4ae7fe3",
"text": "Machine learning (ML) has become a core component of many real-world applications and training data is a key factor that drives current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack has shown that extraction of information on the training set is possible in such MLaaS settings, which has severe security and privacy implications. However, the early demonstrations of the feasibility of such attacks have many assumptions on the adversary, such as using multiple so-called shadow models, knowledge of the target model structure, and having a dataset from the same distribution as the target model’s training data. We relax all these key assumptions, thereby showing that such attacks are very broadly applicable at low cost and thereby pose a more severe risk than previously thought. We present the most comprehensive study so far on this emerging and developing threat using eight diverse datasets which show the viability of the proposed attacks across domains. In addition, we propose the first effective defense mechanisms against such broader class of membership inference attacks that maintain a high level of utility of the ML model.",
"title": ""
}
] |
scidocsrr
|
91380eb925f106edf8ef1d44f266a0cb
|
Rain Bar: Robust Application-Driven Visual Communication Using Color Barcodes
|
[
{
"docid": "046f15ecf1037477b10bfb4fa315c9c9",
"text": "With the rapid proliferation of camera-equipped smart devices (e.g., smartphones, pads, tablets), visible light communication (VLC) over screen-camera links emerges as a novel form of near-field communication. Such communication via smart devices is highly competitive for its user-friendliness, security, and infrastructure-less (i.e., no dependency on WiFi or cellular infrastructure). However, existing approaches mostly focus on improving the transmission speed and ignore the transmission reliability. Considering the interplay between the transmission speed and reliability towards effective end-to-end communication, in this paper, we aim to boost the throughput over screen-camera links by enhancing the transmission reliability. To this end, we propose RDCode, a robust dynamic barcode which enables a novel packet-frame-block structure. Based on the layered structure, we design different error correction schemes at three levels: intra-blocks, inter-blocks and inter-frames, in order to verify and recover the lost blocks and frames. Finally, we implement RDCode and experimentally show that RDCode reaches a high level of transmission reliability (e.g., reducing the error rate to 10%) and yields a at least doubled transmission rate, compared with the existing state-of-the-art approach COBRA.",
"title": ""
}
] |
[
{
"docid": "600ecbb2ae0e5337a568bb3489cd5e29",
"text": "This paper presents a novel approach for haptic object recognition with an anthropomorphic robot hand. Firstly, passive degrees of freedom are introduced to the tactile sensor system of the robot hand. This allows the planar tactile sensor patches to optimally adjust themselves to the object's surface and to acquire additional sensor information for shape reconstruction. Secondly, this paper presents an approach to classify an object directly from the haptic sensor data acquired by a palpation sequence with the robot hand - without building a 3d-model of the object. Therefore, a finite set of essential finger positions and tactile contact patterns are identified which can be used to describe a single palpation step. A palpation sequence can then be merged into a simple statistical description of the object and finally be classified. The proposed approach for haptic object recognition and the new tactile sensor system are evaluated with an anthropomorphic robot hand.",
"title": ""
},
{
"docid": "f702a8c28184a6d49cd2f29a1e4e7ea4",
"text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.",
"title": ""
},
{
"docid": "b4b6417ea0e1bc70c5faa50f8e2edf59",
"text": "As secure processing as well as correct recovery of data getting more important, digital forensics gain more value each day. This paper investigates the digital forensics tools available on the market and analyzes each tool based on the database perspective. We present a survey of digital forensics tools that are either focused on data extraction from databases or assist in the process of database recovery. In our work, a detailed list of current database extraction software is provided. We demonstrate examples of database extractions executed on representative selections from among tools provided in the detailed list. We use a standard sample database with each tool for comparison purposes. Based on the execution results obtained, we compare these tools regarding different criteria such as runtime, static or live acquisition, and more.",
"title": ""
},
{
"docid": "070a1de608a35cddb69b84d5f081e94d",
"text": "Identifying potentially vulnerable locations in a code base is critical as a pre-step for effective vulnerability assessment; i.e., it can greatly help security experts put their time and effort to where it is needed most. Metric-based and pattern-based methods have been presented for identifying vulnerable code. The former relies on machine learning and cannot work well due to the severe imbalance between non-vulnerable and vulnerable code or lack of features to characterize vulnerabilities. The latter needs the prior knowledge of known vulnerabilities and can only identify similar but not new types of vulnerabilities. In this paper, we propose and implement a generic, lightweight and extensible framework, LEOPARD, to identify potentially vulnerable functions through program metrics. LEOPARD requires no prior knowledge about known vulnerabilities. It has two steps by combining two sets of systematically derived metrics. First, it uses complexity metrics to group the functions in a target application into a set of bins. Then, it uses vulnerability metrics to rank the functions in each bin and identifies the top ones as potentially vulnerable. Our experimental results on 11 real-world projects have demonstrated that, LEOPARD can cover 74.0% of vulnerable functions by identifying 20% of functions as vulnerable and outperform machine learning-based and static analysis-based techniques. We further propose three applications of LEOPARD for manual code review and fuzzing, through which we discovered 22 new bugs in real applications like PHP, radare2 and FFmpeg, and eight of them are new vulnerabilities.",
"title": ""
},
{
"docid": "09168164e47fd781e4abeca45fb76c35",
"text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. 3 IMA: Integrated Modular Avionics, see [RTCA11]. 4 SEooC: Safety Element out of Context, see [ISO11].",
"title": ""
},
{
"docid": "4bd123c2c44e703133e9a6093170db39",
"text": "This paper presents a single-phase cascaded H-bridge converter for a grid-connected photovoltaic (PV) application. The multilevel topology consists of several H-bridge cells connected in series, each one connected to a string of PV modules. The adopted control scheme permits the independent control of each dc-link voltage, enabling, in this way, the tracking of the maximum power point for each string of PV panels. Additionally, low-ripple sinusoidal-current waveforms are generated with almost unity power factor. The topology offers other advantages such as the operation at lower switching frequency or lower current ripple compared to standard two-level topologies. Simulation and experimental results are presented for different operating conditions.",
"title": ""
},
{
"docid": "ca3c3dec83821747896d44261ba2f9ad",
"text": "Building discriminative representations for 3D data has been an important task in computer graphics and computer vision research. Convolutional Neural Networks (CNNs) have shown to operate on 2D images with great success for a variety of tasks. Lifting convolution operators to 3D (3DCNNs) seems like a plausible and promising next step. Unfortunately, the computational complexity of 3D CNNs grows cubically with respect to voxel resolution. Moreover, since most 3D geometry representations are boundary based, occupied regions do not increase proportionately with the size of the discretization, resulting in wasted computation. In this work, we represent 3D spaces as volumetric fields, and propose a novel design that employs field probing filters to efficiently extract features from them. Each field probing filter is a set of probing points — sensors that perceive the space. Our learning algorithm optimizes not only the weights associated with the probing points, but also their locations, which deforms the shape of the probing filters and adaptively distributes them in 3D space. The optimized probing points sense the 3D space “intelligently”, rather than operating blindly over the entire domain. We show that field probing is significantly more efficient than 3DCNNs, while providing state-of-the-art performance, on classification tasks for 3D object recognition benchmark datasets.",
"title": ""
},
{
"docid": "741ba628eacb59d7b9f876520406e600",
"text": "Awareness of the physical location for each node is required by many wireless sensor network applications. The discovery of the position can be realized utilizing range measurements including received signal strength, time of arrival, time difference of arrival and angle of arrival. In this paper, we focus on localization techniques based on angle of arrival information between neighbor nodes. We propose a new localization and orientation scheme that considers beacon information multiple hops away. The scheme is derived under the assumption of noisy angle measurements. We show that the proposed method achieves very good accuracy and precision despite inaccurate angle measurements and a small number of beacons",
"title": ""
},
{
"docid": "75f895ff76e7a55d589ff30637524756",
"text": "This paper details the coreference resolution system submitted by Stanford at the CoNLL2011 shared task. Our system is a collection of deterministic coreference resolution models that incorporate lexical, syntactic, semantic, and discourse information. All these models use global document-level information by sharing mention attributes, such as gender and number, across mentions in the same cluster. We participated in both the open and closed tracks and submitted results using both predicted and gold mentions. Our system was ranked first in both tracks, with a score of 57.8 in the closed track and 58.3 in the open track.",
"title": ""
},
{
"docid": "ba6b016ace0c098ab345cd5a01af470d",
"text": "This paper describes a vehicle detection system fusing radar and vision data. Radar data are used to locate areas of interest on images. Vehicle search in these areas is mainly based on vertical symmetry. All the vehicles found in different image areas are mixed together, and a series of filters is applied in order to delete false detections. In order to speed up and improve system performance, guard rail detection and a method to manage overlapping areas are also included. Both methods are explained and justified in this paper. The current algorithm analyzes images on a frame-by-frame basis without any temporal correlation. Two different statistics, namely: 1) frame based and 2) event based, are computed to evaluate vehicle detection efficiency, while guard rail detection efficiency is computed in terms of time savings and correct detection rates. Results and problems are discussed, and directions for future enhancements are provided",
"title": ""
},
{
"docid": "be9b4dbfc747daf36894d6fe11b0db4e",
"text": "type: op op_type: Conv name: conv1 inputs: [ bottom, weight ] outputs: [ top ] location: ip: 127.0.0.1 device: 0 thread: 1 other fields ... Example Op defined in YAML Location: The location that the blob/op resides on, including: ● ip address of the target machine ● what device it is on (CPU/GPU) Thread: Thread is needed for op because both CPU and GPU can be multiple threaded (Streams in terms of NVIDIA GPU).",
"title": ""
},
{
"docid": "7d5556e2bfd8ca3dbc5817e9575148fc",
"text": "We present in this paper a calibration program that controls a calibration board integrated in a Smart Electrical Energy Meter (SEEM). The “SEEM” allows to measure the energy from a single phase line and transmits the value of this energy to a central through a wireless network. The “SEEM” needs to be calibrated in only one point of load to correct the gain and compensate the phase added by the system of measure. Since the calibration is performed for one point of load, this reduces the material used, therefore reduces the cost. Furthermore, the calibration of gain and phase is performed simultaneously which decrease the time of this operation.",
"title": ""
},
{
"docid": "0109c8c7663df5e8ac2abd805924d9f6",
"text": "To ensure system stability and availability during disturbances, industrial facilities equipped with on-site generation, generally utilize some type of load shedding scheme. In recent years, conventional underfrequency and PLC-based load shedding schemes have been integrated with computerized power management systems to provide an “automated” load shedding system. However, these automated solutions lack system operating knowledge and are still best-guess methods which typically result in excessive or insufficient load shedding. An intelligent load shedding system can provide faster and optimal load relief by utilizing actual operating conditions and knowledge of past system disturbances. This paper presents the need for an intelligent, automated load shedding system. Simulation of case studies for two industrial electrical networks are performed to demonstrate the advantages of an intelligent load shedding system over conventional load shedding methods from the design and operation perspectives. Index Terms — Load Shedding (LS), Intelligent Load Shedding (ILS), Power System Transient Stability, Frequency Relay, Programmable Logic Controller (PLC), Power Management System",
"title": ""
},
{
"docid": "819f5df03cebf534a51eb133cd44cb0d",
"text": "Although DBP (di-n-butyl phthalate) is commonly encountered as an artificially-synthesized plasticizer with potential to impair fertility, we confirm that it can also be biosynthesized as microbial secondary metabolites from naturally occurring filamentous fungi strains cultured either in an artificial medium or natural water. Using the excreted crude enzyme from the fungi for catalyzing a variety of substrates, we found that the fungal generation of DBP was largely through shikimic acid pathway, which was assembled by phthalic acid with butyl alcohol through esterification. The DBP production ability of the fungi was primarily influenced by fungal spore density and incubation temperature. This study indicates an important alternative natural waterborne source of DBP in addition to artificial synthesis, which implied fungal contribution must be highlighted for future source control and risk management of DBP.",
"title": ""
},
{
"docid": "e99c8800033f33caa936a6ff8dd79995",
"text": "Terms of service of on-line platforms too often contain clauses that are potentially unfair to the consumer. We present an experimental study where machine learning is employed to automatically detect such potentially unfair clauses. Results show that the proposed system could provide a valuable tool for lawyers and consumers alike.",
"title": ""
},
{
"docid": "ec9c15e543444e88cc5d636bf1f6e3b9",
"text": "Which ZSL method is more robust to GZSL? An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild Wei-Lun Chao*1, Soravit Changpinyo*1, Boqing Gong2, and Fei Sha1,3 1U. of Southern California, 2U. of Central Florida, 3U. of California, Los Angeles NSF IIS-1566511, 1065243, 1451412, 1513966, 1208500, CCF-1139148, USC Graduate Fellowship, a Google Research Award, an Alfred P. Sloan Research Fellowship and ARO# W911NF-12-1-0241 and W911NF-15-1-0484.",
"title": ""
},
{
"docid": "e68992d53fa5bac20f8a4f17d72c7d0d",
"text": "In the field of pattern recognition, data analysis, and machine learning, data points are usually modeled as high-dimensional vectors. Due to the curse-of-dimensionality, it is non-trivial to efficiently process the orginal data directly. Given the unique properties of nonlinear dimensionality reduction techniques, nonlinear learning methods are widely adopted to reduce the dimension of data. However, existing nonlinear learning methods fail in many real applications because of the too-strict requirements (for real data) or the difficulty in parameters tuning. Therefore, in this paper, we investigate the manifold learning methods which belong to the family of nonlinear dimensionality reduction methods. Specifically, we proposed a new manifold learning principle for dimensionality reduction named Curved Cosine Mapping (CCM). Based on the law of cosines in Euclidean space, CCM applies a brand new mapping pattern to manifold learning. In CCM, the nonlinear geometric relationships are obtained by utlizing the law of cosines, and then quantified as the dimensionality-reduced features. Compared with the existing approaches, the model has weaker theoretical assumptions over the input data. Moreover, to further reduce the computation cost, an optimized version of CCM is developed. Finally, we conduct extensive experiments over both artificial and real-world datasets to demonstrate the performance of proposed techniques.",
"title": ""
},
{
"docid": "e770120d43a03e9b43d7de4d47f9a2eb",
"text": "Twitter is an online social networking service on which users worldwide publish their opinions on a variety of topics, discuss current issues, complain, and express many kinds of emotions. Therefore, Twitter is a rich source of data for opinion mining, sentiment and emotion analysis. This paper focuses on this issue by analysing symbols called emotion tokens, including emotion symbols (e.g. emoticons and emoji ideograms). According to observations, emotion tokens are commonly used in many tweets. They directly express one’s emotions regardless of his/her language, hence they have become a useful signal for sentiment analysis in multilingual tweets. The paper describes the approach to extending existing binary sentiment classification approaches using a multi-way emotions classification.",
"title": ""
},
{
"docid": "66a6e9bbdd461fa85a0a09ec1ceb2031",
"text": "BACKGROUND\nConverging evidence indicates a functional disruption in the neural systems for reading in adults with dyslexia. We examined brain activation patterns in dyslexic and nonimpaired children during pseudoword and real-word reading tasks that required phonologic analysis (i.e., tapped the problems experienced by dyslexic children in sounding out words).\n\n\nMETHODS\nWe used functional magnetic resonance imaging (fMRI) to study 144 right-handed children, 70 dyslexic readers, and 74 nonimpaired readers as they read pseudowords and real words.\n\n\nRESULTS\nChildren with dyslexia demonstrated a disruption in neural systems for reading involving posterior brain regions, including parietotemporal sites and sites in the occipitotemporal area. Reading skill was positively correlated with the magnitude of activation in the left occipitotemporal region. Activation in the left and right inferior frontal gyri was greater in older compared with younger dyslexic children.\n\n\nCONCLUSIONS\nThese findings provide neurobiological evidence of an underlying disruption in the neural systems for reading in children with dyslexia and indicate that it is evident at a young age. The locus of the disruption places childhood dyslexia within the same neurobiological framework as dyslexia, and acquired alexia, occurring in adults.",
"title": ""
}
] |
scidocsrr
|
87692edca81182c14462fe3465d18bf2
|
Mobile activity recognition for a whole day: recognizing real nursing activities with big dataset
|
[
{
"docid": "e700afa9064ef35f7d7de40779326cb0",
"text": "Human activity recognition is important for many applications. This paper describes a human activity recognition framework based on feature selection techniques. The objective is to identify the most important features to recognize human activities. We first design a set of new features (called physical features) based on the physical parameters of human motion to augment the commonly used statistical features. To systematically analyze the impact of the physical features on the performance of the recognition system, a single-layer feature selection framework is developed. Experimental results indicate that physical features are always among the top features selected by different feature selection methods and the recognition accuracy is generally improved to 90%, or 8% better than when only statistical features are used. Moreover, we show that the performance is further improved by 3.8% by extending the single-layer framework to a multi-layer framework which takes advantage of the inherent structure of human activities and performs feature selection and classification in a hierarchical manner.",
"title": ""
},
{
"docid": "7786fac57e0c1392c6a5101681baecb0",
"text": "We deployed 72 sensors of 10 modalities in 15 wireless and wired networked sensor systems in the environment, in objects, and on the body to create a sensor-rich environment for the machine recognition of human activities. We acquired data from 12 subjects performing morning activities, yielding over 25 hours of sensor data. We report the number of activity occurrences observed during post-processing, and estimate that over 13000 and 14000 object and environment interactions occurred. We describe the networked sensor setup and the methodology for data acquisition, synchronization and curation. We report on the challenges and outline lessons learned and best practice for similar large scale deployments of heterogeneous networked sensor systems. We evaluate data acquisition quality for on-body and object integrated wireless sensors; there is less than 2.5% packet loss after tuning. We outline our use of the dataset to develop new sensor network self-organization principles and machine learning techniques for activity recognition in opportunistic sensor configurations. Eventually this dataset will be made public.",
"title": ""
}
] |
[
{
"docid": "a4457f8c560d65a80cb03209b4a0a380",
"text": "Purpose – Fundamentally, the success of schools depends on first-rate school leadership, on leaders reinforcing the teachers’ willingness to adhere to the school’s vision, creating a sense of purpose, binding them together and encouraging them to engage in continuous learning. Leadership, vision and organizational learning are considered to be the key to school improvement. However, systematic empirical evidence of a direct relationship between leadership, vision and organizational learning is limited. The present study aims to explore the influence of principals’ leadership style on school organizational learning, using school vision as a mediator. Design/methodology/approach – The data were collected from 1,474 teachers at 104 elementary schools in northern Israel, and aggregated to the school level. Findings – Mediating regression analysis demonstrated that the school vision was a significant predictor of school organizational learning and functioned as a partial mediator only between principals’ transformational leadership style and school organizational learning. Moreover, the principals’ transformational leadership style predicted school organizational vision and school organizational learning processes. In other words, school vision, as shaped by the principal and the staff, is a powerful motivator of the process of organizational learning in school. Research implications/limitations – The research results have implications for the guidance of leadership practice, training, appraisal and professional development. Originality/value – The paper explores the centrality of school vision and its effects on the achievement of the school’s aims by means of organizational learning processes.",
"title": ""
},
{
"docid": "ee0e4dda5654896a27fa6525c23199cc",
"text": "This paper addresses the task of designing a modular neural network architecture that jointly solves different tasks. As an example we use the tasks of depth estimation and semantic segmentation given a single RGB image. The main focus of this work is to analyze the cross-modality influence between depth and semantic prediction maps on their joint refinement. While most of the previous works solely focus on measuring improvements in accuracy, we propose a way to quantify the cross-modality influence. We show that there is a relationship between final accuracy and cross-modality influence, although not a simple linear one. Hence a larger cross-modality influence does not necessarily translate into an improved accuracy. We find that a beneficial balance between the cross-modality influences can be achieved by network architecture and conjecture that this relationship can be utilized to understand different network design choices. Towards this end we propose a Convolutional Neural Network (CNN) architecture that fuses the state-of-the-art results for depth estimation and semantic labeling. By balancing the cross-modality influences between depth and semantic prediction, we achieve improved results for both tasks using the NYU-Depth v2 benchmark.",
"title": ""
},
{
"docid": "c34b6fac632c05c73daee2f0abce3ae8",
"text": "OBJECTIVES\nUnilateral strength training produces an increase in strength of the contralateral homologous muscle group. This process of strength transfer, known as cross education, is generally attributed to neural adaptations. It has been suggested that unilateral strength training of the free limb may assist in maintaining the functional capacity of an immobilised limb via cross education of strength, potentially enhancing recovery outcomes following injury. Therefore, the purpose of this review is to examine the impact of immobilisation, the mechanisms that may contribute to cross education, and possible implications for the application of unilateral training to maintain strength during immobilisation.\n\n\nDESIGN\nCritical review of literature.\n\n\nMETHODS\nSearch of online databases.\n\n\nRESULTS\nImmobilisation is well known for its detrimental effects on muscular function. Early reductions in strength outweigh atrophy, suggesting a neural contribution to strength loss, however direct evidence for the role of the central nervous system in this process is limited. Similarly, the precise neural mechanisms responsible for cross education strength transfer remain somewhat unknown. Two recent studies demonstrated that unilateral training of the free limb successfully maintained strength in the contralateral immobilised limb, although the role of the nervous system in this process was not quantified.\n\n\nCONCLUSIONS\nCross education provides a unique opportunity for enhancing rehabilitation following injury. By gaining an understanding of the neural adaptations occurring during immobilisation and cross education, future research can utilise the application of unilateral training in clinical musculoskeletal injury rehabilitation.",
"title": ""
},
{
"docid": "4b3576e6451fa78886ce440e55b04979",
"text": "In this paper, we model the document revision detection problem as a minimum cost branching problem that relies on computing document distances. Furthermore, we propose two new document distance measures, word vector-based Dynamic Time Warping (wDTW) and word vector-based Tree Edit Distance (wTED). Our revision detection system is designed for a large scale corpus and implemented in Apache Spark. We demonstrate that our system can more precisely detect revisions than state-of-the-art methods by utilizing the Wikipedia revision dumps 1 and simulated data sets.",
"title": ""
},
{
"docid": "af78c57378a472c8f7be4eb354feb442",
"text": "Mutations in the human sonic hedgehog gene ( SHH) are the most frequent cause of autosomal dominant inherited holoprosencephaly (HPE), a complex brain malformation resulting from incomplete cleavage of the developing forebrain into two separate hemispheres and ventricles. Here we report the clinical and molecular findings in five unrelated patients with HPE and their relatives with an identified SHH mutation. Three new and one previously reported SHH mutations were identified, a fifth proband was found to carry a reciprocal subtelomeric rearrangement involving the SHH locus in 7q36. An extremely wide intrafamilial phenotypic variability was observed, ranging from the classical phenotype with alobar HPE accompanied by typical severe craniofacial abnormalities to very mild clinical signs of choanal stenosis or solitary median maxillary central incisor (SMMCI) only. Two families were initially ascertained because of microcephaly in combination with developmental delay and/or mental retardation and SMMCI, the latter being a frequent finding in patients with an identified SHH mutation. In other affected family members a delay in speech acquisition and learning disabilities were the leading clinical signs. Conclusion: mutational analysis of the sonic hedgehog gene should not only be considered in patients presenting with the classical holoprosencephaly phenotype but also in those with two or more clinical signs of the wide phenotypic spectrum of associated abnormalities, especially in combination with a positive family history.",
"title": ""
},
{
"docid": "13d7ccd473e5db8fabdf4af18688774f",
"text": "Aortopathies pose a significant healthcare burden due to excess early mortality, increasing incidence, and underdiagnosis. Understanding the underlying genetic causes, early diagnosis, timely surveillance, prophylactic repair, and family screening are keys to addressing these diseases. Next-generation sequencing continues to expand our understanding of the genetic causes of heritable aortopathies, rapidly clarifying their underlying molecular pathophysiology and suggesting new potential therapeutic targets. This review will summarize the pathogenetic mechanisms and management of heritable genetic aortopathies with attention to specific forms of both syndromic and nonsyndromic disorders, including Marfan syndrome, Loeys-Dietz syndrome, vascular Ehlers-Danlos syndrome, and familial thoracic aortic aneurysm and dissection.",
"title": ""
},
{
"docid": "5905846f7763039d4f89fcb0b05c66fe",
"text": "This review presents and discusses the contribution of machine learning techniques for diagnosis and disease monitoring in the context of clinical vision science. Many ocular diseases leading to blindness can be halted or delayed when detected and treated at its earliest stages. With the recent developments in diagnostic devices, imaging and genomics, new sources of data for early disease detection and patients' management are now available. Machine learning techniques emerged in the biomedical sciences as clinical decision-support techniques to improve sensitivity and specificity of disease detection and monitoring, increasing objectively the clinical decision-making process. This manuscript presents a review in multimodal ocular disease diagnosis and monitoring based on machine learning approaches. In the first section, the technical issues related to the different machine learning approaches will be present. Machine learning techniques are used to automatically recognize complex patterns in a given dataset. These techniques allows creating homogeneous groups (unsupervised learning), or creating a classifier predicting group membership of new cases (supervised learning), when a group label is available for each case. To ensure a good performance of the machine learning techniques in a given dataset, all possible sources of bias should be removed or minimized. For that, the representativeness of the input dataset for the true population should be confirmed, the noise should be removed, the missing data should be treated and the data dimensionally (i.e., the number of parameters/features and the number of cases in the dataset) should be adjusted. The application of machine learning techniques in ocular disease diagnosis and monitoring will be presented and discussed in the second section of this manuscript. To show the clinical benefits of machine learning in clinical vision sciences, several examples will be presented in glaucoma, age-related macular degeneration, and diabetic retinopathy, these ocular pathologies being the major causes of irreversible visual impairment.",
"title": ""
},
{
"docid": "886924ad0c7b354c1ac8aec3955639cc",
"text": "Collaborative filtering is one of the most successful and extensive methods used by recommender systems for predicting the preferences of users. However, traditional collaborative filtering only uses rating information to model the user, the data sparsity problem and the cold start problem will severely reduce the recommendation performance. To overcome these problems, we propose two neural network models to improve recommendations. The first one called TDAE uses a denoising autoencoder to integrate the ratings and the explicit trust relationships between users in the social networks in order to model the preferences of users more accurately. However, the explicit trust information is very sparse, which limits the performance of this model. Therefore, we propose a second method called TDAE++ for extracting the implicit trust relationships between users with similarity measures, where we employ both the explicit and implicit trust information together to improve the quality of recommendations. Finally, we inject the trust information into both the input and the hidden layer in order to fuse these two types of different information to learn more reliable semantic representations of users. Comprehensive experiments based on three popular data sets verify that our proposed models perform better than other state-of-the-art approaches in common recommendation tasks.",
"title": ""
},
{
"docid": "c89a7027de2362aa1bfe64b084073067",
"text": "This paper considers pick-and-place tasks using aerial vehicles equipped with manipulators. The main focus is on the development and experimental validation of a nonlinear model-predictive control methodology to exploit the multi-body system dynamics and achieve optimized performance. At the core of the approach lies a sequential Newton method for unconstrained optimal control and a high-frequency low-level controller tracking the generated optimal reference trajectories. A low cost quadrotor prototype with a simple manipulator extending more than twice the radius of the vehicle is designed and integrated with an on-board vision system for object tracking. Experimental results show the effectiveness of model-predictive control to motivate the future use of real-time optimal control in place of standard ad-hoc gain scheduling techniques.",
"title": ""
},
{
"docid": "216d4c4dc479588fb91a27e35b4cb403",
"text": "At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), that leads to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices.\n We present techniques to distribute storage, computation, and communication of hubs for extreme scale graphs in distributed memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and PageRank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%.",
"title": ""
},
{
"docid": "264d5db966f9cbed6b128087c7e3761e",
"text": "We study auction mechanisms for sharing spectrum among a group of users, subject to a constraint on the interference temperature at a measurement point. The users access the channel using spread spectrum signaling and so interfere with each other. Each user receives a utility that is a function of the received signal-to-interference plus noise ratio. We propose two auction mechanisms for allocating the received power. The first is an auction in which users are charged for received SINR, which, when combined with logarithmic utilities, leads to a weighted max-min fair SINR allocation. The second is an auction in which users are charged for power, which maximizes the total utility when the bandwidth is large enough and the receivers are co-located. Both auction mechanisms are shown to be socially optimal for a limiting “large system” with co-located receivers, where bandwidth, power and the number of users are increased in fixed proportion. We also formulate an iterative and distributed bid updating algorithm, and specify conditions under which this algorithm converges globally to the Nash equilibrium of the auction.",
"title": ""
},
{
"docid": "d4acd79e2fdbc9b87b2dbc6ebfa2dd43",
"text": "Airbnb, an online marketplace for accommodations, has experienced a staggering growth accompanied by intense debates and scattered regulations around the world. Current discourses, however, are largely focused on opinions rather than empirical evidences. Here, we aim to bridge this gap by presenting the first large-scale measurement study on Airbnb, using a crawled data set containing 2.3 million listings, 1.3 million hosts, and 19.3 million reviews. We measure several key characteristics at the heart of the ongoing debate and the sharing economy. Among others, we find that Airbnb has reached a global yet heterogeneous coverage. The majority of its listings across many countries are entire homes, suggesting that Airbnb is actually more like a rental marketplace rather than a spare-room sharing platform. Analysis on star-ratings reveals that there is a bias toward positive ratings, amplified by a bias toward using positive words in reviews. The extent of such bias is greater than Yelp reviews, which were already shown to exhibit a positive bias. We investigate a key issue - commercial hosts who own multiple listings on Airbnb - repeatedly discussed in the current debate. We find that their existence is prevalent, they are early movers towards joining Airbnb, and their listings are disproportionately entire homes and located in the US. Our work advances the current understanding of how Airbnb is being used and may serve as an independent and empirical reference to inform the debate.",
"title": ""
},
{
"docid": "b16b04f55e7d2ce4f0ba86eb7c0a1996",
"text": "Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced. In this Opinion article, Hosny et al. discuss the application of artificial intelligence to image-based tasks in the field of radiology and consider the advantages and challenges of its clinical implementation.",
"title": ""
},
{
"docid": "f9cd0767487bd46e760d9e3adb35f1fd",
"text": "In this paper, the exciton transport properties of an octa(butyl)-substituted metal-free phthalocyanine (H2-OBPc) molecular crystal have been explored by means of a combined computational (molecular dynamics and electronic structure calculations) and theoretical (model Hamiltonian) approximation. The excitonic couplings in phthalocyanines, where multiple quasi-degenerate excited states are present in the isolated chromophore, are computed with a multistate diabatization scheme which is able to capture both shortand long-range excitonic coupling effects. Thermal motions in phthalocyanine molecular crystals at room temperature cause substantial fluctuation of the excitonic couplings between neighboring molecules (dynamic disorder). The average values of the excitonic couplings are found to be not much smaller than the reorganization energy for the excitation energy transfer and the commonly assumed incoherent regime for this class of materials cannot be invoked. A simple but realistic model Hamiltonian is proposed to study the exciton dynamics in phthalocyanine molecular crystals or aggregates beyond the incoherent regime.",
"title": ""
},
{
"docid": "a46460113926b688f144ddec74e03918",
"text": "The authors describe a new self-report instrument, the Inventory of Depression and Anxiety Symptoms (IDAS), which was designed to assess specific symptom dimensions of major depression and related anxiety disorders. They created the IDAS by conducting principal factor analyses in 3 large samples (college students, psychiatric patients, community adults); the authors also examined the robustness of its psychometric properties in 5 additional samples (high school students, college students, young adults, postpartum women, psychiatric patients) who were not involved in the scale development process. The IDAS contains 10 specific symptom scales: Suicidality, Lassitude, Insomnia, Appetite Loss, Appetite Gain, Ill Temper, Well-Being, Panic, Social Anxiety, and Traumatic Intrusions. It also includes 2 broader scales: General Depression (which contains items overlapping with several other IDAS scales) and Dysphoria (which does not). The scales (a) are internally consistent, (b) capture the target dimensions well, and (c) define a single underlying factor. They show strong short-term stability and display excellent convergent validity and good discriminant validity in relation to other self-report and interview-based measures of depression and anxiety.",
"title": ""
},
{
"docid": "ecd99c9f87e1c5e5f529cb5fcbb206f2",
"text": "The concept of supply chain is about managing coordinated information and material flows, plant operations, and logistics. It provides flexibility and agility in responding to consumer demand shifts without cost overlays in resource utilization. The fundamental premise of this philosophy is; synchronization among multiple autonomous business entities represented in it. That is, improved coordination within and between various supply-chain members. Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decision-making processes, and improvement in the overall performance of each member as well as the supply chain. Describes architecture to create the appropriate structure, install proper controls, and implement principles of optimization to synchronize the supply chain. A supply-chain model based on a collaborative system approach is illustrated utilizing the example of the textile industry. process flexibility and coordination of processes across many sites. More and more organizations are promoting employee empowerment and the need for rules-based, real-time decision support systems to attain organizational and process flexibility, as well as to respond to competitive pressure to introduce new products more quickly, cheaply and of improved quality. The underlying philosophy of managing supply chains has evolved to respond to these changing business trends. Supply-chain management phenomenon has received the attention of researchers and practitioners in various topics. In the earlier years, the emphasis was on materials planning utilizing materials requirements planning techniques, inventory logistics management with one warehouse multi-retailer distribution system, and push and pull operation techniques for production systems. In the last few years, however, there has been a renewed interest in designing and implementing integrated systems, such as enterprise resource planning, multi-echelon inventory, and synchronous-flow manufacturing, respectively. A number of factors have contributed to this shift. First, there has been a realization that better planning and management of complex interrelated systems, such as materials planning, inventory management, capacity planning, logistics, and production systems will lead to overall improvement in enterprise productivity. Second, advances in information and communication technologies complemented by sophisticated decision support systems enable the designing, implementing and controlling of the strategic and tactical strategies essential to delivery of integrated systems. In the next section, a framework that offers an unified approach to dealing with enterprise related problems is presented. A framework for analysis of enterprise integration issues As mentioned in the preceding section, the availability of advanced production and logistics management systems has the potential of fundamentally influencing enterprise integration issues. The motivation in pursuing research issues described in this paper is to propose a framework that enables dealing with these effectively. The approach suggested in this paper utilizing supply-chain philosophy for enterprise integration proposes domain independent problem solving and modeling, and domain dependent analysis and implementation. The purpose of the approach is to ascertain characteristics of the problem independent of the specific problem environment. 
Consequently, the approach delivers solution(s) or the solution method that are intrinsic to the problem and not its environment. Analysis methods help to understand characteristics of the solution methodology, as well as providing specific guarantees of effectiveness. Invariably, insights gained from these analyses can be used to develop effective problem-solving tools and techniques for complex enterprise integration problems. The discussion of the framework is organized as follows. First, the key guiding principles of the proposed framework on which a supply chain ought to be built are outlined. Then, a cooperative supply-chain (CSC) system is described as a special class of a supply-chain network implementation. Next, discussion on a distributed problem-solving strategy that could be employed in integrating this type of system is presented. Following this, key components of a CSC system are described. Finally, insights on modeling a CSC system are offered. Key modeling principles are elaborated through two distinct modeling approaches in the management science discipline. Supply chain guiding principles Firms have increasingly been adopting enterprise/supply-chain management techniques in order to deal with integration issues. To focus on these integration efforts, the following guiding principles for the supply-chain framework are proposed. These principles encapsulate trends in production and logistics management that a supply-chain arrangement may be designed to capture. . Supply chain is a cooperative system. The supply-chain arrangement exists on cooperation among its members. Cooperation occurs in many forms, such as sharing common objectives and goals for the group entity; utilizing joint policies, for instance in marketing and production; setting up common budgets, cost and price structures; and identifying commitments on capacity, production plans, etc. . Supply chain exists on the group dynamics of its members. The existence of a supply chain is dependent on the interaction among its members. This interaction occurs in the form of exchange of information with regard to input, output, functions and controls, such as objectives and goals, and policies. By analyzing this information, members of a supply chain may choose to modify their behavior to be attuned with group expectations. . Negotiation and compromise are norms of operation in a supply chain. In order to realize goals and objectives of the group, members negotiate on commitments made to one another for price, capacity, production plans, etc. These negotiations often lead to compromises by one or many members on these issues, leading up to realization of sub-optimal goals and objectives by members. . Supply-chain system solutions are Pareto-optimal (satisficing), not optimizing. Supply-chain problems similar to many real-world applications involve several objective functions of their members simultaneously. In all such applications, it is extremely rare to have one feasible solution that simultaneously optimizes all of the objective functions. Typically, optimizing one of the objective functions has the effect of moving another objective function away from its most desirable value. These are the usual conflicts among the objective functions in the multi-objective models. As a multi-objective problem, the supply-chain model produces non-dominated or Pareto-optimal solutions.
That is, solutions for a supply-chain problem do not leave any member worse-off at the expense of another. . Integration in supply chain is achieved through synchronization. Integration across the supply chain is achieved through synchronization of activities at the member entity and aggregating its impact through process, function, business, and on to enterprise levels, either at the member entity or the group entity. Thus, by synchronization of supply-chain components, existing bottlenecks in the system are eliminated, while future ones are prevented from occurring. A cooperative supply-chain A supply-chain network depicted in Figure 1 can be a complex web of systems, sub-systems, operations, activities, and their relationships to one another, belonging to its various members, namely suppliers, carriers, manufacturing plants, distribution centers, retailers, and consumers. The design, modeling and implementation of such a system, therefore, can be difficult, unless various parts of it are cohesively tied to the whole. The concept of a supply-chain is about managing coordinated information and material flows, plant operations, and logistics through a common set of principles, strategies, policies, and performance metrics throughout its developmental life cycle (Lee and Billington, 1993). It provides flexibility and agility in responding to consumer demand shifts with minimum cost overlays in resource utilization. The fundamental premise of this philosophy is synchronization among multiple autonomous entities represented in it. That is, improved coordination within and between various supply-chain members. Coordination is achieved within the framework of commitments made by members to each other. Members negotiate and compromise in a spirit of cooperation in order to meet these commitments. Hence, the label cooperative supply-chain (CSC). Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decision-making processes, and improvement in the overall performance of each member, as well as the supply-chain (group) (Chandra, 1997; Poirier, 1999; Tzafastas and Kapsiotis, 1994). A generic textile supply chain has for its primary raw material vendors cotton growers and/or chemical suppliers, depending upon whether the end product is cotton, polyester or some combination of cotton and polyester garment. Secondary raw material vendors are suppliers of accessories such as zippers, buttons, thread, garment tags, etc. Other tier suppliers in the complete pipeline are: fiber manufacturers for producing the polyester or cotton fiber yarn; textile manufacturers for weaving and dyeing yarn into colored textile fabric; an apparel maker for cutting, sewing and packing the garment; a distribution center for merchandising the garment; and a retailer selling the brand name garment to consumers at a shopping mall or center. Synchronization of the textile supply chain is achieved through coordination primarily of: . replenishment schedules that have be",
"title": ""
},
{
"docid": "0aeb9567ed3ddf5ca7f33725fb5aa310",
"text": "Code-reuse attacks based on return oriented programming are among the most popular exploitation techniques used by attackers today. Few practical defenses are able to stop such attacks on arbitrary binaries without access to source code. A notable exception are the techniques that employ new hardware, such as Intel’s Last Branch Record (LBR) registers, to track all indirect branches and raise an alert when a sensitive system call is reached by means of too many indirect branches to short gadgets—under the assumption that such gadget chains would be indicative of a ROP attack. In this paper, we evaluate the implications. What is “too many” and how short is “short”? Getting the thresholds wrong has serious consequences. In this paper, we show by means of an attack on Internet Explorer that while current defenses based on these techniques raise the bar for exploitation, they can be bypassed. Conversely, tuning the thresholds to make the defenses more aggressive, may flag legitimate program behavior as an attack. We analyze the problem in detail and show that determining the right values is difficult.",
"title": ""
},
{
"docid": "745562de56499ff0030f35afa8d84b7f",
"text": "This paper will show how the accuracy and security of SCADA systems can be improved by using anomaly detection to identify bad values caused by attacks and faults. The performance of invariant induction and ngram anomaly-detectors will be compared and this paper will also outline plans for taking this work further by integrating the output from several anomalydetecting techniques using Bayesian networks. Although the methods outlined in this paper are illustrated using the data from an electricity network, this research springs from a more general attempt to improve the security and dependability of SCADA systems using anomaly detection.",
"title": ""
},
{
"docid": "c3f23cf5015e35dfd4b10254984bf0d4",
"text": "We investigate the applicability of passive RFID systems to the task of identifying multiple tagged objects simultaneously, assuming that the number of tags is not known in advance. We present a combinatorial model of the communication mechanism between the reader device and the tags, and use this model to derive the optimal parameter setting for the reading process, based on estimates for the number of tags. Some results on the performance of an implementation are presented. Keywords— RFID, collision-resolution, tagging, combinatorics.",
"title": ""
},
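The passage above models reader-tag communication combinatorially and derives an optimal parameter setting from an estimate of the tag count. As an illustration only (not the paper's own derivation), the sketch below applies the textbook framed slotted ALOHA calculation, where the reader picks the frame size that maximises per-slot read efficiency for the estimated number of tags; the candidate frame sizes and function names are assumptions.

```python
def expected_singletons(n_tags, frame_size):
    """Expected number of slots containing exactly one tag reply when
    n_tags tags each pick one of frame_size slots uniformly at random."""
    return n_tags * (1.0 - 1.0 / frame_size) ** (n_tags - 1)

def best_frame_size(n_tags_estimate, candidates=(1, 2, 4, 8, 16, 32, 64, 128, 256)):
    """Pick the frame size (from typical power-of-two options) that maximises
    per-slot read efficiency for the estimated tag count."""
    return max(candidates,
               key=lambda size: expected_singletons(n_tags_estimate, size) / size)

# With roughly 40 tags estimated, efficiency peaks at a 32-slot frame,
# consistent with the rule of thumb that the frame size should track the tag count.
print(best_frame_size(40))
```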
{
"docid": "553e476ad6a0081aed01775f995f4d16",
"text": "This document describes the findings of the Second Workshop on Neural Machine Translation and Generation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2018). First, we summarize the research trends of papers presented in the proceedings, and note that there is particular interest in linguistic structure, domain adaptation, data augmentation, handling inadequate resources, and analysis of models. Second, we describe the results of the workshop’s shared task on efficient neural machine translation (NMT), where participants were tasked with creating NMT systems that are both accurate and efficient.",
"title": ""
}
] |
scidocsrr
|
a4910b4ce61bba86adcdeabfa3c21b18
|
Dynamic Factor Models
|
[
{
"docid": "f3f27a324736617f20abbf2ffd806f6d",
"text": "516",
"title": ""
}
] |
[
{
"docid": "79c331cf08ebecf8de5809dfd6ab74d9",
"text": "Geographical Information System (GIS) and Global Positioning System (GPS) technologies are expanding their traditional applications to embrace a stream of consumer-focused, location-based applications. Through an integration with handheld devices capable of wireless communication and mobile computing, a wide range of what might be generically referred to as \"Location-Based Services\" (LBS) may be offered to mobile users. A location-based service is able to provide targetted spatial information to mobile workers and consumers. These include utility location information, personal or asset tracking, concierge and routeguidance information, to name just a few of the possible LBS. The technologies and applications of LBS will play an ever increasingly important role in the modern, mobile, always-connected society. This paper endeavours to provide some background to the technology underlying location-based services and to discuss some issues related to developing and launching LBS. These include whether wireless mobile technologies are ready to support LBS, which mobile positioning technologies can be used and what are their shortcomings, and how GIS developers manipulate spatial information to generate appropriate map images on mobile devices (such as cell phones and PDAs). In addition the authors discuss such issues as interoperability, privacy protection and the market demand for LBS.",
"title": ""
},
{
"docid": "d7bc62e7fca922f9b97e42deff85d010",
"text": "In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback. Our method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS. Our methods complement fully automatic methods in producing highquality summaries with a minimum number of iterations and feedbacks. We conduct multiple simulation-based experiments and analyze the effect of feedbackbased concept selection in the ILP setup in order to maximize the user-desired content in the summary.",
"title": ""
},
{
"docid": "d1444f26cee6036f1c2df67a23d753be",
"text": "Text mining has becoming an emerging research area in now-a-days that helps to extract useful information from large amount of natural language text documents. The need of grouping similar documents together for different applications has gaining the attention of researchers in this area. Document clustering organizes the documents into different groups called as clusters. The documents in one cluster have higher degree of similarity than the documents in other cluster. The paper provides an overview of the document clustering reviewed from different papers and the challenges in document clustering. KeywordsText Mining, Document Clustering, Similarity Measures, Challenges in Document Clustering",
"title": ""
},
{
"docid": "1a78e17056cca09250c7cc5f81fb271b",
"text": "This paper presents a lightweight stereo vision-based driving lane detection and classification system to achieve the ego-car’s lateral positioning and forward collision warning to aid advanced driver assistance systems (ADAS). For lane detection, we design a self-adaptive traffic lanes model in Hough Space with a maximum likelihood angle and dynamic pole detection region of interests (ROIs), which is robust to road bumpiness, lane structure changing while the ego-car’s driving and interferential markings on the ground. What’s more, this model can be improved with geographic information system or electronic map to achieve more accurate results. Besides, the 3-D information acquired by stereo matching is used to generate an obstacle mask to reduce irrelevant objects’ interfere and detect forward collision distance. For lane classification, a convolutional neural network is trained by using manually labeled ROI from KITTI data set to classify the left/right-side line of host lane so that we can provide significant information for lane changing strategy making in ADAS. Quantitative experimental evaluation shows good true positive rate on lane detection and classification with a real-time (15Hz) working speed. Experimental results also demonstrate a certain level of system robustness on variation of the environment.",
"title": ""
},
{
"docid": "e02310e36b8306e3f033830447af2f1e",
"text": "This paper suggests the need for a software engineering research community conversation about the future that the community would like to have. The paper observes that the research directions the community has taken in the past, dating at least back to the formative NATO Conferences in the late 1960's, have been driven largely by desire to meet the needs of practice. The paper suggests that the community should discuss whether it is now appropriate to balance this problem-solving-oriented research with a stronger complement of curiosity-driven research. This paper does not advocate what that balance should be. Neither does it advocate what curiosity driven research topics should be pursued (although illustrative examples are offered). It does does advocate the need for a community conversation about these questions.",
"title": ""
},
{
"docid": "37437fb45a309bc887ee68da304ec370",
"text": "We introduce WebGazer, an online eye tracker that uses common webcams already present in laptops and mobile devices to infer the eye-gaze locations of web visitors on a page in real time. The eye tracking model self-calibrates by watching web visitors interact with the web page and trains a mapping between features of the eye and positions on the screen. This approach aims to provide a natural experience to everyday users that is not restricted to laboratories and highly controlled user studies. WebGazer has two key components: a pupil detector that can be combined with any eye detection library, and a gaze estimator using regression analysis informed by user interactions. We perform a large remote online study and a small in-person study to evaluate WebGazer. The findings show that WebGazer can learn from user interactions and that its accuracy is sufficient for approximating the user’s gaze. As part of this paper, we release the first eye tracking library that can be easily integrated in any website for real-time gaze interactions, usability studies, or web research.",
"title": ""
},
{
"docid": "f63da8e7659e711bcb7a148ea12a11f2",
"text": "We have presented two CCA-based approaches for data fusion and group analysis of biomedical imaging data and demonstrated their utility on fMRI, sMRI, and EEG data. The results show that CCA and M-CCA are powerful tools that naturally allow the analysis of multiple data sets. The data fusion and group analysis methods presented are completely data driven, and use simple linear mixing models to decompose the data into their latent components. Since CCA and M-CCA are based on second-order statistics they provide a relatively lessstrained solution as compared to methods based on higherorder statistics such as ICA. While this can be advantageous, the flexibility also tends to lead to solutions that are less sparse than those obtained using assumptions of non-Gaussianity-in particular superGaussianity-at times making the results more difficult to interpret. Thus, it is important to note that both approaches provide complementary perspectives, and hence it is beneficial to study the data using different analysis techniques.",
"title": ""
},
{
"docid": "ae4974a3d7efedab7cd6651101987e79",
"text": "Fisher Kernels and Deep Learning were two developments with significant impact on large-scale object categorization in the last years. Both approaches were shown to achieve state-of-the-art results on large-scale object categorization datasets, such as ImageNet. Conceptually, however, they are perceived as very different and it is not uncommon for heated debates to spring up when advocates of both paradigms meet at conferences or workshops. In this work, we emphasize the similarities between both architectures rather than their differences and we argue that such a unified view allows us to transfer ideas from one domain to the other. As a concrete example we introduce a method for learning a support vector machine classifier with Fisher kernel at the same time as a task-specific data representation. We reinterpret the setting as a multi-layer feed forward network. Its final layer is the classifier, parameterized by a weight vector, and the two previous layers compute Fisher vectors, parameterized by the coefficients of a Gaussian mixture model. We introduce a gradient descent based learning algorithm that, in contrast to other feature learning techniques, is not just derived from intuition or biological analogy, but has a theoretical justification in the framework of statistical learning theory. Our experiments show that the new training procedure leads to significant improvements in classification accuracy while preserving the modularity and geometric interpretability of a support vector machine setup.",
"title": ""
},
{
"docid": "aea15d034420b3567cf253c80c39301f",
"text": "This paper describes the application of the numerical 3D-modelling and scattering analysis embedded into systems simulations for special applied cases. The effects of helicopter rotors on a DVOR-system and the effects of wind turbines WT on DVOR and on some radar systems (primary radar, weather radar) by its forward scattering are evaluated. The evaluation are analyzed along operational aspects, such as the radar rotation and the averaging in case of the weather radar due to the volume targets.",
"title": ""
},
{
"docid": "bcda82b5926620060f65506ccbac042f",
"text": "This paper investigates spirolaterals for their beauty of form and the unexpected complexity arising from them. From a very simple generative procedure, spirolaterals can be created having great complexity and variation. Using mathematical and computer-based methods, issues of closure, variation, enumeration, and predictictability are discussed. A historical review is also included. The overriding interest in this research is to develop methods and procedures to investigate geometry for the purpose of inspiration for new architectural and sculptural forms. This particular phase will concern the two dimensional representations of spirolaterals.",
"title": ""
},
{
"docid": "c132272c8caa7158c0549bd5f2d626aa",
"text": "This study investigates alternative material compositions for flexible silicone-based dry electroencephalography (EEG) electrodes to improve the performance lifespan while maintaining high-fidelity transmission of EEG signals. Electrode materials were fabricated with varying concentrations of silver-coated silica and silver flakes to evaluate their electrical, mechanical, and EEG transmission performance. Scanning electron microscope (SEM) analysis of the initial electrode development identified some weak points in the sensors' construction, including particle pull-out and ablation of the silver coating on the silica filler. The newly-developed sensor materials achieved significant improvement in EEG measurements while maintaining the advantages of previous silicone-based electrodes, including flexibility and non-toxicity. The experimental results indicated that the proposed electrodes maintained suitable performance even after exposure to temperature fluctuations, 85% relative humidity, and enhanced corrosion conditions demonstrating improvements in the environmental stability. Fabricated flat (forehead) and acicular (hairy sites) electrodes composed of the optimum identified formulation exhibited low impedance and reliable EEG measurement; some initial human experiments demonstrate the feasibility of using these silicone-based electrodes for typical lab data collection applications.",
"title": ""
},
{
"docid": "6a1da115f887498370b400efa6e57ed0",
"text": "Local search heuristics for non-convex optimizations are popular in applied machine learning. However, in general it is hard to guarantee that such algorithms even converge to a local minimum, due to the existence of complicated saddle point structures in high dimensions. Many functions have degenerate saddle points such that the first and second order derivatives cannot distinguish them with local optima. In this paper we use higher order derivatives to escape these saddle points: we design the first efficient algorithm guaranteed to converge to a third order local optimum (while existing techniques are at most second order). We also show that it is NP-hard to extend this further to finding fourth order local optima.",
"title": ""
},
{
"docid": "a19f4e5f36b04fed7937be1c90ce3581",
"text": "This paper describes a map-matching algorithm designed to support the navigational functions of a real-time vehicle performance and emissions monitoring system currently under development, and other transport telematics applications. The algorithm is used together with the outputs of an extended Kalman filter formulation for the integration of GPS and dead reckoning data, and a spatial digital database of the road network, to provide continuous, accurate and reliable vehicle location on a given road segment. This is irrespective of the constraints of the operational environment, thus alleviating outage and accuracy problems associated with the use of stand-alone location sensors. The map-matching algorithm has been tested using real field data and has been found to be superior to existing algorithms, particularly in how it performs at road intersections.",
"title": ""
},
{
"docid": "752d5329b38010a77975ad0ec2e5eaab",
"text": "Hemali Padalia, Pooja Moteriya, Yogesh Baravalia and Sumitra Chanda* 1 Phytochemical, Pharmacological and Microbiological Laboratory, Department of Biosciences (UGC-CAS), Saurashtra University, Rajkot-360005, Gujarat, India 2 Phytochemical, Pharmacological and Microbiological Laboratory, Department of Biochemistry, Saurashtra University, Rajkot-360005, Gujarat, India * Corresponding author: email: [email protected]",
"title": ""
},
{
"docid": "24dda2b2334810b375f7771685669177",
"text": "This paper presents a 64-times interleaved 2.6 GS/s 10b successive-approximation-register (SAR) ADC in 65 nm CMOS. The ADC combines interleaving hierarchy with an open-loop buffer array operated in feedforward-sampling and feedback-SAR mode. The sampling front-end consists of four interleaved T/Hs at 650 MS/s that are optimized for timing accuracy and sampling linearity, while the back-end consists of four ADC arrays, each consisting of 16 10b current-mode non-binary SAR ADCs. The interleaving hierarchy allows for many ADCs to be used per T/H and eliminates distortion stemming from open loop buffers interfacing between the front-end and back-end. Startup on-chip calibration deals with offset and gain mismatches as well as DAC linearity. Measurements show that the prototype ADC achieves an SNDR of 48.5 dB and a THD of less than 58 dB at Nyquist with an input signal of 1.4 . An estimated sampling clock skew spread of 400 fs is achieved by careful design and layout. Up to 4 GHz an SNR of more than 49 dB has been measured, enabled by the less than 110 fs rms clock jitter. The ADC consumes 480 mW from 1.2/1.3/1.6 V supplies and occupies an area of 5.1 mm.",
"title": ""
},
{
"docid": "1f27caaaeae8c82db6a677f66f2dee74",
"text": "State of the art visual SLAM systems have recently been presented which are capable of accurate, large-scale and real-time performance, but most of these require stereo vision. Important application areas in robotics and beyond open up if similar performance can be demonstrated using monocular vision, since a single camera will always be cheaper, more compact and easier to calibrate than a multi-camera rig. With high quality estimation, a single camera moving through a static scene of course effectively provides its own stereo geometry via frames distributed over time. However, a classic issue with monocular visual SLAM is that due to the purely projective nature of a single camera, motion estimates and map structure can only be recovered up to scale. Without the known inter-camera distance of a stereo rig to serve as an anchor, the scale of locally constructed map portions and the corresponding motion estimates is therefore liable to drift over time. In this paper we describe a new near real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems, but accounts for the additional challenges presented by monocular input. In particular, we present a new pose-graph optimisation technique which allows for the efficient correction of rotation, translation and scale drift at loop closures. Especially, we describe the Lie group of similarity transformations and its relation to the corresponding Lie algebra. We also present in detail the system’s new image processing front-end which is able accurately to track hundreds of features per frame, and a filter-based approach for feature initialisation within keyframe-based SLAM. Our approach is proven via large-scale simulation and real-world experiments where a camera completes large looped trajectories.",
"title": ""
},
{
"docid": "2706e8112607f40de96ee00bcae1b911",
"text": "A resurgence in the use of medical herbs in the Western world, and the co-use of modern and traditional therapies is becoming more common. Thus there is the potential for both pharmacokinetic and pharmacodynamic herb-drug interactions. For example, systems such as the cytochrome P450 (CYP) may be particularly vulnerable to modulation by the multiple active constituents of herbs, as it is well known that the CYPs are subject to induction and inhibition by exposure to a wide variety of xenobiotics. Using in vitro, in silico, and in vivo approaches, many herbs and natural compounds isolated from herbs have been identified as substrates, inhibitors, and/or inducers of various CYP enzymes. For example, St. John's wort is a potent inducer of CYP3A4, which is mediated by activating the orphan pregnane X receptor. It also contains ingredients that inhibit CYP1A2, CYP2C9, CYP2C19, CYP2D6, and CYP3A4. Many other common medicinal herbs also exhibited inducing or inhibiting effects on the CYP system, with the latter being competitive, noncompetitive, or mechanism-based. It appears that the regulation of CYPs by herbal products complex, depending on the herb type, their administration dose and route, the target organ and species. Due to the difficulties in identifying the active constituents responsible for the modulation of CYP enzymes, prediction of herb-drug metabolic interactions is difficult. However, herb-CYP interactions may have important clinical and toxicological consequences. For example, induction of CYP3A4 by St. John's wort may partly provide an explanation for the enhanced plasma clearance of a number of drugs, such as cyclosporine and innadivir, which are known substrates of CYP3A4, although other mechanisms including modulation of gastric absorption and drug transporters cannot be ruled out. In contrast, many organosulfur compounds, such as diallyl sulfide from garlic, are potent inhibitors of CYP2E1; this may provide an explanation for garlic's chemoproventive effects, as many mutagens require activation by CYP2E1. Therefore, known or potential herb-CYP interactions exist, and further studies on their clinical and toxicological roles are warranted. Given that increasing numbers of people are exposed to a number of herbal preparations that contain many constituents with potential of CYP modulation, high-throughput screening assays should be developed to explore herb-CYP interactions.",
"title": ""
},
{
"docid": "fbbd24318caac8a8a2a63670f6a624cd",
"text": "We show that elliptic-curve cryptography implementations on mobile devices are vulnerable to electromagnetic and power side-channel attacks. We demonstrate full extraction of ECDSA secret signing keys from OpenSSL and CoreBitcoin running on iOS devices, and partial key leakage from OpenSSL running on Android and from iOS's CommonCrypto. These non-intrusive attacks use a simple magnetic probe placed in proximity to the device, or a power probe on the phone's USB cable. They use a bandwidth of merely a few hundred kHz, and can be performed cheaply using an audio card and an improvised magnetic probe.",
"title": ""
},
{
"docid": "4e69f2a69c1063e15b85350eeafc868d",
"text": "Autism spectrum disorders (ASD) are largely characterized by deficits in imitation, pragmatic language, theory of mind, and empathy. Previous research has suggested that a dysfunctional mirror neuron system may explain the pathology observed in ASD. Because EEG oscillations in the mu frequency (8-13 Hz) over sensorimotor cortex are thought to reflect mirror neuron activity, one method for testing the integrity of this system is to measure mu responsiveness to actual and observed movement. It has been established that mu power is reduced (mu suppression) in typically developing individuals both when they perform actions and when they observe others performing actions, reflecting an observation/execution system which may play a critical role in the ability to understand and imitate others' behaviors. This study investigated whether individuals with ASD show a dysfunction in this system, given their behavioral impairments in understanding and responding appropriately to others' behaviors. Mu wave suppression was measured in ten high-functioning individuals with ASD and ten age- and gender-matched control subjects while watching videos of (1) a moving hand, (2) a bouncing ball, and (3) visual noise, or (4) moving their own hand. Control subjects showed significant mu suppression to both self and observed hand movement. The ASD group showed significant mu suppression to self-performed hand movements but not to observed hand movements. These results support the hypothesis of a dysfunctional mirror neuron system in high-functioning individuals with ASD.",
"title": ""
},
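Mu suppression of the kind measured in the study above is commonly quantified as the log ratio of 8-13 Hz power in a condition relative to a baseline. The sketch below shows one conventional way to compute such a ratio; the sampling rate, band limits, and synthetic signals are assumptions for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, fmin=8.0, fmax=13.0):
    """Mean power spectral density of `signal` within [fmin, fmax] Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), int(2 * fs)))
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()

def mu_suppression(condition, baseline, fs):
    """Log ratio of mu-band power; negative values indicate suppression."""
    return np.log(band_power(condition, fs) / band_power(baseline, fs))

# Illustrative synthetic data: 2 s of noise at an assumed 256 Hz sampling rate.
rng = np.random.default_rng(0)
baseline = rng.standard_normal(512)
condition = 0.7 * rng.standard_normal(512)  # attenuated, so the ratio is < 1
print(mu_suppression(condition, baseline, fs=256.0))
```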
{
"docid": "a52ea2c0d475b6ad37c73e89e06aedde",
"text": "BACKGROUND\nInnovations in mobile and electronic healthcare are revolutionizing the involvement of both doctors and patients in the modern healthcare system by extending the capabilities of physiological monitoring devices. Despite significant progress within the monitoring device industry, the widespread integration of this technology into medical practice remains limited. The purpose of this review is to summarize the developments and clinical utility of smart wearable body sensors.\n\n\nMETHODS\nWe reviewed the literature for connected device, sensor, trackers, telemonitoring, wireless technology and real time home tracking devices and their application for clinicians.\n\n\nRESULTS\nSmart wearable sensors are effective and reliable for preventative methods in many different facets of medicine such as, cardiopulmonary, vascular, endocrine, neurological function and rehabilitation medicine. These sensors have also been shown to be accurate and useful for perioperative monitoring and rehabilitation medicine.\n\n\nCONCLUSION\nAlthough these devices have been shown to be accurate and have clinical utility, they continue to be underutilized in the healthcare industry. Incorporating smart wearable sensors into routine care of patients could augment physician-patient relationships, increase the autonomy and involvement of patients in regards to their healthcare and will provide for novel remote monitoring techniques which will revolutionize healthcare management and spending.",
"title": ""
}
] |
scidocsrr
|
29b458239e26e48c5e79749589043607
|
A sentiment analysis model for hotel reviews based on supervised learning
|
[
{
"docid": "cd89079c74f5bb0218be67bf680b410f",
"text": "This paper illustrates a sentiment analysis approach to extract sentiments associated with polarities of positive or negative for specific subjects from a document, instead of classifying the whole document into positive or negative.The essential issues in sentiment analysis are to identify how sentiments are expressed in texts and whether the expressions indicate positive (favorable) or negative (unfavorable) opinions toward the subject. In order to improve the accuracy of the sentiment analysis, it is important to properly identify the semantic relationships between the sentiment expressions and the subject. By applying semantic analysis with a syntactic parser and sentiment lexicon, our prototype system achieved high precision (75-95%, depending on the data) in finding sentiments within Web pages and news articles.",
"title": ""
},
{
"docid": "8a2586b1059534c5a23bac9c1cc59906",
"text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.",
"title": ""
}
] |
[
{
"docid": "d3e409b074c4c26eb208b27b7b58a928",
"text": "The increase in concern for carbon emission and reduction in natural resources for conventional power generation, the renewable energy based generation such as Wind, Photovoltaic (PV), and Fuel cell has gained importance. Out of which the PV based generation has gained significance due to availability of abundant sunlight. As the Solar power conversion is a low efficient conversion process, accurate and reliable, modeling of solar cell is important. Due to the non-linear nature of diode based PV model, the accurate design of PV cell is a difficult task. A built-in model of PV cell is available in Simscape, Simelectronics library, Matlab. The equivalent circuit parameters have to be computed from data sheet and incorporated into the model. However it acts as a stiff source when implemented with a MPPT controller. Henceforth, to overcome this drawback, in this paper a two-diode model of PV cell is implemented in Matlab Simulink with reduced four required parameters along with similar configuration of the built-in model. This model allows incorporation of MPPT controller. The I-V and P-V characteristics of these two models are investigated under different insolation levels. A PV based generation system feeding a DC load is designed and investigated using these two models and further implemented with MPPT based on P&O technique.",
"title": ""
},
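Since the abstract above centres on a two-diode PV cell model, a minimal sketch of that model's implicit output-current equation is given below, solved with a bracketed root finder. All parameter values and names here are illustrative assumptions, not figures from the cited paper or any datasheet.

```python
import numpy as np
from scipy.optimize import brentq

def two_diode_current(v, iph=8.0, i01=1e-10, i02=1e-10,
                      rs=0.01, rp=100.0, a1=1.0, a2=2.0, vt=0.026):
    """Terminal current of a two-diode PV cell at terminal voltage v.

    Solves the implicit equation
      I = Iph - I01*(exp((V+I*Rs)/(a1*Vt)) - 1)
             - I02*(exp((V+I*Rs)/(a2*Vt)) - 1) - (V+I*Rs)/Rp
    The bracket [-Iph, Iph] is valid between short circuit and a little
    above open circuit for these illustrative parameters.
    """
    def residual(i):
        vd = v + i * rs  # voltage across the two diodes and the shunt path
        return (iph
                - i01 * (np.exp(vd / (a1 * vt)) - 1.0)
                - i02 * (np.exp(vd / (a2 * vt)) - 1.0)
                - vd / rp
                - i)
    return brentq(residual, -iph, iph)

# I-V and P-V curves of the kind an MPPT routine (e.g. P&O) would sweep.
voltages = np.linspace(0.0, 0.6, 61)
currents = np.array([two_diode_current(v) for v in voltages])
power = voltages * currents
```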
{
"docid": "74d7e52e2187ff2ac1fd4c3ef28e2c82",
"text": "This work is focused on processor allocation in shared-memory multiprocessor systems, where no knowledge of the application is available when applications are submitted. We perform the processor allocation taking into account the characteristics of the application measured at run-time. We want to demonstrate the importance of an accurate performance analysis and the criteria used to distribute the processors. With this aim, we present the SelfAnalyzer, an approach to dynamically analyzing the performance of applications (speedup, efficiency and execution time), and the Performance-Driven Processor Allocation (PDPA), a new scheduling policy that distributes processors considering both the global conditions of the system and the particular characteristics of running applications. This work also defends the importance of the interaction between the medium-term and the long-term scheduler to control the multiprogramming level in the case of the clairvoyant scheduling pol-icies1. We have implemented our proposal in an SGI Origin2000 with 64 processors and we have compared its performance with that of some scheduling policies proposed so far and with the native IRIX scheduling policy. Results show that the combination of the SelfAnalyzer+PDPA with the medium/long-term scheduling interaction outperforms the rest of the scheduling policies evaluated. The evaluation shows that in workloads where a simple equipartition performs well, the PDPA also performs well, and in extreme workloads where all the applications have a bad performance, our proposal can achieve a speedup of 3.9 with respect to an equipartition and 11.8 with respect to the native IRIX scheduling policy.",
"title": ""
},
{
"docid": "45a098c09a3803271f218fafd4d951cd",
"text": "Recent years have seen a tremendous increase in the demand for wireless bandwidth. To support this demand by innovative and resourceful use of technology, future communication systems will have to shift towards higher carrier frequencies. Due to the tight regulatory situation, frequencies in the atmospheric attenuation window around 300 GHz appear very attractive to facilitate an indoor, short range, ultra high speed THz communication system. In this paper, we investigate the influence of diffuse scattering at such high frequencies on the characteristics of the communication channel and its implications on the non-line-of-sight propagation path. The Kirchhoff approach is verified by an experimental study of diffuse scattering from randomly rough surfaces commonly encountered in indoor environments using a fiber-coupled terahertz time-domain spectroscopy system to perform angle- and frequency-dependent measurements. Furthermore, we integrate the Kirchhoff approach into a self-developed ray tracing algorithm to model the signal coverage of a typical office scenario.",
"title": ""
},
{
"docid": "7c057b63c525a03ad2f40f625b6157e3",
"text": "As the selection of products and services becomes profuse in the technology market, it is often the delighting user experience (UX) that differentiates a successful product from the competitors. Product development is no longer about implementing features and testing their usability, but understanding users' daily lives and evaluating if a product resonates with the in-depth user needs. Although UX is a widely adopted term in industry, the tools for evaluating UX in product development are still inadequate. Based on industrial case studies and the latest research on UX evaluation, this workshop forms a model for aligning the used UX evaluation methods to product development processes. The results can be used to advance the state of \"putting UX evaluation into practice\".",
"title": ""
},
{
"docid": "3bf252bcb0953016cc5a834d9d9325d3",
"text": "This paper proposes a digital phase leading filter current compensation (PLFCC) technique for a continuous conduction mode boost power factor correction to improve PF in high line voltage and light load conditions. The proposed technique provides a corrected average inductor current reference and utilizes an enhanced duty ratio feed-forward technique which can cancel the adverse effect of the phase leading currents caused by filter capacitors. Moreover, the proposed PLFCC technique also provides the switching dead-zone in nature so the switching loss can be reduced. Therefore, the proposed PLFCC can significantly improve power quality and can achieve a high efficiency in high line voltage and light load conditions. The principle and analysis of the proposed PLFCC are presented, and performance and feasibility are verified by experimental results from the universal input (90-260 VAC) and 750 W-400 V output laboratory prototype.",
"title": ""
},
{
"docid": "9648c6cbdd7a04c595b7ba3310f32980",
"text": "Increase in identity frauds, crimes, security there is growing need of fingerprint technology in civilian and law enforcement applications. Partial fingerprints are of great interest which are either found at crime scenes or resulted from improper scanning. These fingerprints are poor in quality and the number of features present depends on size of fingerprint. Due to the lack of features such as core and delta, general fingerprint matching algorithms do not perform well for partial fingerprint matching. By using combination of level1 and level 2 features accuracy of partial matching cannot be increased. Therefore, we utilize extended features in combination with other feature set. Efficacious fusion methods for coalesce of different modality systems perform better for these types of prints. In this paper, we propose a method for partial fingerprint matching using score level fusion of minutiae based radon transform and pores based LBP extraction. To deal with broken ridges and fragmentary information, radon transform is used to get local information around minutiae. Finally, we evaluate the performance by comparing Equal Error Rate (ERR) of proposed method and existing method and proposed method reduces the error rate to 1.84%.",
"title": ""
},
{
"docid": "dd5df9fa96f921bc55d65281e6b7437a",
"text": "AIM\nPoor eating habits among young adults are a public health concern. This survey examined the eating habits of undergraduate university students in Finland. We assessed students' dietary intake of a variety of food groups, their adherence to international dietary guidelines (whole sample and by gender), and the associations between importance of eating healthy and dietary guidelines adherence (whole sample and by gender).\n\n\nMETHODS\nDuring the 2013-2014 academic year, 1,189 undergraduate students enrolled at the University of Turku in southwestern Finland completed an online self-administered questionnaire. Students reported their eating habits of 12 food groups, the number of daily servings of fruits/vegetables they consume and how important it is for them to eat healthy. For dietary adherence recommendations, we employed WHO guidelines. Chi-square statistic tested the differences in dietary guidelines adherence between males and females and also the associations between the gradients of importance of healthy eating and the self reported eating habits for each of the food groups, for the whole sample and by gender.\n\n\nRESULTS\nWe observed high levels of dietary adherence (>70%) for most of the 'unhealthy food' items (cake/cookies, snacks, fast food/canned food, and lemonade/soft drinks), and moderate adherence for most of the 'healthy food' items (>50%) (dairy/dairy products, fruit/vegetables servings/day, fresh fruit, salads/raw vegetables and cereal/cereal products). Fish/seafood, meat/sausage products and cooked vegetables had levels <50% for adherence to the guidelines. Women had better adherence for meat/sausage products, fast food/canned food and for most 'healthy food' items (p≤0.001), whereas men had better adherence for sweets (difference=12.8%, p≤0.001), lemonade/soft drinks (difference=16.7%, p≤0.001) and fish/seafood (difference=6.6%, p=0.040) compared to women. Most students considered important to eat healthy (78.8%). The importance of eating healthy was significantly associated with adherence for all food groups besides sweets and cake/cookies. These associations remained significant for women but some of them not for men (cereal/cereal products, snacks and sweets).\n\n\nCONCLUSIONS\nThe results suggest high adherence to the guidelines mainly for 'unhealthy food' groups, and moderate adherence for healthier food groups. There was also accordance between regarding eating healthy as important and actually eating healthy. However, there are improvements to be considered for specific food groups, as well as gender differences when implementing public health strategies related to food intake.",
"title": ""
},
{
"docid": "59291cb1c13ab274f06b619698784e23",
"text": "We present a new class of Byzantine-tolerant State Machine Replication protocols for asynchronous environments that we term Byzantine Chain Replication. We demonstrate two implementations that present different trade-offs between performance and security, and compare these with related work. Leveraging an external reconfiguration service, these protocols are not based on Byzantine consensus, do not require majoritybased quorums during normal operation, and the set of replicas is easy to reconfigure. One of the implementations is instantiated with t+ 1 replicas to tolerate t failures and is useful in situations where perimeter security makes malicious attacks unlikely. Applied to in-memory BerkeleyDB replication, it supports 20,000 transactions per second while a fully Byzantine implementation supports 12,000 transactions per second—about 70% of the throughput of a non-replicated database.",
"title": ""
},
{
"docid": "e306a1da0e4ca73b43bfc8284d6c7904",
"text": "In terms of mounting a computer on the body, the computer's weight, size, shape, placement and method of attachment can elicit a number of effects. Inappropriate design may mean that the wearer is unable to perform specific tasks or achieve goals. Excessive stress on the body may result in perceptions of discomfort, which may in turn affect task performance, but ultimately raises issues of health and safety. This paper proposes a methodology for assessing the affects of wearing a computer in terms of physiological energy expenditure, the biomechanical effects due to changes in movement patterns, posture and perceptions of localised pain and discomfort due to musculoskeletal loading, and perceptions of well- being through comfort assessment. From ratings of these effects the paper proposes 5 levels to determine the wearability of a computer.",
"title": ""
},
{
"docid": "084b83aed850aca07bed298de455c110",
"text": "Leveraging built-in cameras on smartphones and tablets, face authentication provides an attractive alternative of legacy passwords due to its memory-less authentication process. However, it has an intrinsic vulnerability against the media-based facial forgery (MFF) where adversaries use photos/videos containing victims' faces to circumvent face authentication systems. In this paper, we propose FaceLive, a practical and robust liveness detection mechanism to strengthen the face authentication on mobile devices in fighting the MFF-based attacks. FaceLive detects the MFF-based attacks by measuring the consistency between device movement data from the inertial sensors and the head pose changes from the facial video captured by built-in camera. FaceLive is practical in the sense that it does not require any additional hardware but a generic front-facing camera, an accelerometer, and a gyroscope, which are pervasively available on today's mobile devices. FaceLive is robust to complex lighting conditions, which may introduce illuminations and lead to low accuracy in detecting important facial landmarks; it is also robust to a range of cumulative errors in detecting head pose changes during face authentication.",
"title": ""
},
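The FaceLive abstract above hinges on measuring the consistency between inertial-sensor motion and camera-estimated head pose changes. The snippet below is a simplified heuristic in that spirit (a correlation check on yaw only); the sampling assumptions, threshold, and function names are mine and do not reproduce the paper's actual algorithm.

```python
import numpy as np

def motion_consistency(gyro_yaw_rate, head_yaw, fs=30.0, threshold=0.6):
    """Correlate the device yaw rate (gyroscope, deg/s) with the head-yaw
    rate derived from per-frame pose estimates (degrees). Both streams are
    assumed to be time-aligned and sampled at fs Hz. A low correlation
    suggests the face in front of the camera does not move with the device,
    as would happen with a replayed photo or video."""
    head_yaw_rate = np.gradient(np.asarray(head_yaw)) * fs
    r = np.corrcoef(np.asarray(gyro_yaw_rate), head_yaw_rate)[0, 1]
    return r >= threshold, r
```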
{
"docid": "9647b3278ee0ad7f8cb1c40c2dbe1331",
"text": "I want to describe an idea which is related to other things that were suggested in the colloquium, though my approach will be quite different. The basic theme of these suggestions have been to try to get rid of the continuum and build up physical theory from discreteness. The most obvious place in which the continuum comes into physics is the structure of space-time. But, apparently independently of this, there is also another place in which the continuum is built into present physical theory. This is in quantum theory, where there is the superposition law: if you have two states, you’re supposed to be able to form any linear combination of these two states. These are complex linear combinations, so again you have a continuum coming in—namely the two-dimensional complex continuum— in a fundamental way. My basic idea is to try and build up both space-time and quantum mechanics simultaneously—from combinatorial principles—but not (at least in the first instance) to try and change physical theory. In the first place it is a reformulation, though ultimately, perhaps, there will be some changes. Different things will suggest themselves in a reformulated theory, than in the original formulation. One scarcely wants to take every concept in existing theory and try to make it combinatorial: there are too many things which look continuous in existing theory. And to try to eliminate the continuum by approximating it by some discrete structure would be to change the theory. The idea, instead, is to concentrate only on things which, in fact, are discrete in existing theory and try and use them as primary concepts—then to build up other things using these discrete primary concepts as the basic building blocks. Continuous concepts could emerge in a limit, when we take more and more complicated systems. The most obvious physical concept that one has to start with, where quantum mechanics says something is discrete, and which is connected with the structure of space-time in a very intimate way, is in angular momentum. The idea here, then, is to start with the concept of angular momentum— here one has a discrete spectrum—and use the rules for combining angular",
"title": ""
},
{
"docid": "f9c8209fcecbbed99aa29761dffc8e25",
"text": "ImageNet is a large-scale database of object classes with millions of images. Unfortunately only a small fraction of them is manually annotated with bounding-boxes. This prevents useful developments, such as learning reliable object detectors for thousands of classes. In this paper we propose to automatically populate ImageNet with many more bounding-boxes, by leveraging existing manual annotations. The key idea is to localize objects of a target class for which annotations are not available, by transferring knowledge from related source classes with available annotations. We distinguish two kinds of source classes: ancestors and siblings. Each source provides knowledge about the plausible location, appearance and context of the target objects, which induces a probability distribution over windows in images of the target class. We learn to combine these distributions so as to maximize the location accuracy of the most probable window. Finally, we employ the combined distribution in a procedure to jointly localize objects in all images of the target class. Through experiments on 0.5 million images from 219 classes we show that our technique (i) annotates a wide range of classes with bounding-boxes; (ii) effectively exploits the hierarchical structure of ImageNet, since all sources and types of knowledge we propose contribute to the results; (iii) scales efficiently.",
"title": ""
},
{
"docid": "f910efe3b9bf7450d29c582e83ba0557",
"text": "Based on the intuition that frequent patterns can be used to predict the next few items that users would want to access, sequential pattern mining-based next-items recommendation algorithms have performed well in empirical studies including online product recommendation. However, most current methods do not perform personalized sequential pattern mining, and this seriously limits their capability to recommend the best next-items to each specific target user. In this paper, we introduce a personalized sequential pattern mining-based recommendation framework. Using a novel Competence Score measure, the proposed framework effectively learns user-specific sequence importance knowledge, and exploits this additional knowledge for accurate personalized recommendation. Experimental results on real-world datasets demonstrate that the proposed framework effectively improves the efficiency for mining sequential patterns, increases the user-relevance of the identified frequent patterns, and most importantly, generates significantly more accurate next-items recommendation for the target users.",
"title": ""
},
{
"docid": "047112c682f64fc6a272a7e80d5f1a1b",
"text": "In this paper, we study an important yet largely under-explored setting of graph embedding, i.e., embedding communities instead of each individual nodes. We find that community embedding is not only useful for community-level applications such as graph visualization, but also beneficial to both community detection and node classification. To learn such embedding, our insight hinges upon a closed loop among community embedding, community detection and node embedding. On the one hand, node embedding can help improve community detection, which outputs good communities for fitting better community embedding. On the other hand, community embedding can be used to optimize the node embedding by introducing a community-aware high-order proximity. Guided by this insight, we propose a novel community embedding framework that jointly solves the three tasks together. We evaluate such a framework on multiple real-world datasets, and show that it improves graph visualization and outperforms state-of-the-art baselines in various application tasks, e.g., community detection and node classification.",
"title": ""
},
{
"docid": "87e61b20768a9f8397031798295874f8",
"text": "Arterial pressure is a cyclic phenomenon characterized by a pressure wave oscillating around the mean blood pressure, from diastolic to systolic blood pressure, defining the pulse pressure. Aortic input impedance is a measure of the opposition of the circulation to an oscillatory flow input (stroke volume generated by heart work). Aortic input impedance integrates factors opposing LV ejection, such as peripheral resistance, viscoelastic properties and dimensions of the large central arteries, and the intensity and timing of the pressure wave reflections, associated with the opposition to LV ejection influenced by inertial forces. The two most frequently used methods of arterial stiffness are measurement of PWV and central (aortic or common carotid artery) pulse wave analysis, recorded directly at the carotid artery or indirectly in the ascending aorta from radial artery pressure curve. The arterial system is heterogenous and characterized by the existence of a stiffness gradient with progressive stiffness increase (PWV) from ascending aorta and large elastic proximal arteries to the peripheral muscular conduit arteries. Analysis of aortic or carotid pressure waveform and amplitude concerns the effect of reflected waves on pressure shape and amplitude, estimated in absolute terms, augmented pressure in millimetre of mercury, or, in relative terms, 'augmentation index' (Aix in percentage of pulse pressure). Finally, if the aortic PWV has the highest predictive value for prognosis, the aortic or central artery pressure waveform should be recorded and analysed in parallel with the measure of PWV to allow a deeper analysis of arterial haemodynamics.",
"title": ""
},
{
"docid": "bfae60b46b97cf2491d6b1136c60f6a6",
"text": "Educational data mining concerns with developing methods for discovering knowledge from data that come from educational domain. In this paper we used educational data mining to improve graduate students’ performance, and overcome the problem of low grades of graduate students. In our case study we try to extract useful knowledge from graduate students data collected from the college of Science and Technology – Khanyounis. The data include fifteen years period [1993-2007]. After preprocessing the data, we applied data mining techniques to discover association, classification, clustering and outlier detection rules. In each of these four tasks, we present the extracted knowledge and describe its importance in educational domain.",
"title": ""
},
{
"docid": "cbc6986bf415292292b7008ae4d13351",
"text": "In this work we present a method to improve the pruning step of the current state-of-the-art methodology to compress neural networks. The novelty of the proposed pruning technique is in its differentiability, which allows pruning to be performed during the backpropagation phase of the network training. This enables an end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. The experimental results show that the joint optimization of both the thresholds and the network weights permits to reach a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33 % compared to the current state-of-the-art. Furthermore, we believe that this is the first study where the generalization capabilities in transfer learning tasks of the features extracted by a pruned network are analyzed. To achieve this goal, we show that the representations learned using the proposed pruning methodology maintain the same effectiveness and generality of those learned by the corresponding non-compressed network on a set of different recognition tasks.",
"title": ""
},
{
"docid": "4d1ad34588e76f80eef028c55384b39a",
"text": "Switched reluctance motor (SRM) is attractive for the driving motor of electric vehicles (EVs) and hybrid electric vehicles (HEVs) from the view point which does not use rare-earth materials. However, special drive circuit, which is different from general three-phase inverter used for permanent magnet synchronous motor and induction motor, is required in order to drive SRM. By the way, in some of commercial HEVs, boost chopper combined with three-phase inverter is used in order to apply high voltage to motor windings at high-speed range. In this system, the size of the additional reactor, which is required in boost chopper, is a problem. Then, this paper proposes a novel drive circuit with voltage boost function without additional reactor for SRM. In addition, control scheme and controller configuration for the proposed circuit are described. Moreover, the effectiveness of the proposed circuit is verified by simulation and experimental results.",
"title": ""
},
{
"docid": "727a53dad95300ee9749c13858796077",
"text": "Device to device (D2D) communication underlaying LTE can be used to distribute traffic loads of eNBs. However, a conventional D2D link is controlled by an eNB, and it still remains burdens to the eNB. We propose a completely distributed power allocation method for D2D communication underlaying LTE using deep learning. In the proposed scheme, a D2D transmitter can decide the transmit power without any help from other nodes, such as an eNB or another D2D device. Also, the power set, which is delivered from each D2D node independently, can optimize the overall cell throughput. We suggest a distirbuted deep learning architecture in which the devices are trained as a group, but operate independently. The deep learning can optimize total cell throughput while keeping constraints such as interference to eNB. The proposed scheme, which is implemented model using Tensorflow, can provide same throughput with the conventional method even it operates completely on distributed manner.",
"title": ""
},
{
"docid": "47ee81ef9fb8a9bc792ee6edc9a2b503",
"text": "Current image captioning approaches generate descriptions which lack specific information, such as named entities that are involved in the images. In this paper we propose a new task which aims to generate informative image captions, given images and hashtags as input. We propose a simple but effective approach to tackle this problem. We first train a convolutional neural networks long short term memory networks (CNN-LSTM) model to generate a template caption based on the input image. Then we use a knowledge graph based collective inference algorithm to fill in the template with specific named entities retrieved via the hashtags. Experiments on a new benchmark dataset collected from Flickr show that our model generates news-style image descriptions with much richer information. Our model outperforms unimodal baselines significantly with various evaluation metrics.",
"title": ""
}
] |
scidocsrr
|
22e3f411d852cef6d1d7ec72aabbe735
|
Power-aware routing based on the energy drain rate for mobile ad hoc networks
|
[
{
"docid": "7785c16b3d0515057c8a0ec0ed55b5de",
"text": "Most ad hoc mobile devices today operate on batteries. Hence, power consumption becomes an important issue. To maximize the lifetime of ad hoc mobile networks, the power consumption rate of each node must be evenly distributed, and the overall transmission power for each connection request must be minimized. These two objectives cannot be satisfied simultaneously by employing routing algorithms proposed in previous work. In this article we present a new power-aware routing protocol to satisfy these two constraints simultaneously; we also compare the performance of different types of power-related routing algorithms via simulation. Simulation results confirm the need to strike a balance in attaining service availability performance of the whole network vs. the lifetime of ad hoc mobile devices.",
"title": ""
},
{
"docid": "bbdb676a2a813d29cd78facebc38a9b8",
"text": "In this paper we develop a new multiaccess protocol for ad hoc radio networks. The protocol is based on the original MACA protocol with the adition of a separate signalling channel. The unique feature of our protocol is that it conserves battery power at nodes by intelligently powering off nodes that are not actively transmitting or receiving packets. The manner in which nodes power themselves off does not influence the delay or throughput characteristics of our protocol. We illustrate the power conserving behavior of PAMAS via extensive simulations performed over ad hoc networks containing 10-20 nodes. Our results indicate that power savings of between 10% and 70% are attainable in most systems. Finally, we discuss how the idea of power awareness can be built into other multiaccess protocols as well.",
"title": ""
}
] |
[
{
"docid": "ef3bfb8b04eea94724e0124b0cfe723e",
"text": "Generative adversarial networks (GANs) have demonstrated to be successful at generating realistic real-world images. In this paper we compare various GAN techniques, both supervised and unsupervised. The effects on training stability of different objective functions are compared. We add an encoder to the network, making it possible to encode images to the latent space of the GAN. The generator, discriminator and encoder are parameterized by deep convolutional neural networks. For the discriminator network we experimented with using the novel Capsule Network, a state-of-the-art technique for detecting global features in images. Experiments are performed using a digit and face dataset, with various visualizations illustrating the results. The results show that using the encoder network it is possible to reconstruct images. With the conditional GAN we can alter visual attributes of generated or encoded images. The experiments with the Capsule Network as discriminator result in generated images of a lower quality, compared to a standard convolutional neural network.",
"title": ""
},
{
"docid": "63934cfd6042d8bb2227f4e83b005cc2",
"text": "To support effective exploration, it is often stated that interactive visualizations should provide rapid response times. However, the effects of interactive latency on the process and outcomes of exploratory visual analysis have not been systematically studied. We present an experiment measuring user behavior and knowledge discovery with interactive visualizations under varying latency conditions. We observe that an additional delay of 500ms incurs significant costs, decreasing user activity and data set coverage. Analyzing verbal data from think-aloud protocols, we find that increased latency reduces the rate at which users make observations, draw generalizations and generate hypotheses. Moreover, we note interaction effects in which initial exposure to higher latencies leads to subsequently reduced performance in a low-latency setting. Overall, increased latency causes users to shift exploration strategy, in turn affecting performance. We discuss how these results can inform the design of interactive analysis tools.",
"title": ""
},
{
"docid": "5229fb13c66ca8a2b079f8fe46bb9848",
"text": "We put forth a lookup-table-based modular reduction method which partitions the binary string of an integer to be reduced into blocks according to its runs. Its complexity depends on the amount of runs in the binary string. We show that the new reduction is almost twice as fast as the popular Barrett’s reduction, and provide a thorough complexity analysis of the method.",
"title": ""
},
{
"docid": "f195e7f1018e1e1a6836c9d110ce1de4",
"text": "Motivated by the goal of obtaining more-anthropomorphic walking in bipedal robots, this paper considers a hybrid model of a 3D hipped biped with feet and locking knees. The main observation of this paper is that functional Routhian Reduction can be used to extend two-dimensional walking to three dimensions—even in the presence of periods of underactuation—by decoupling the sagittal and coronal dynamics of the 3D biped. Specifically, we assume the existence of a control law that yields stable walking for the 2D sagittal component of the 3D biped. The main result of the paper is that utilizing this controller together with “reduction control laws” yields walking in three dimensions. This result is supported through simulation.",
"title": ""
},
{
"docid": "0c1cd807339481f3a0b6da1fbe96950c",
"text": "Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the quality of learned models, as they have very sparse training data for what are often high-dimensional feature spaces. What is needed is a way to generate an unbounded number of training programs that finely cover the feature space. At the same time the generated programs must be similar to the types of programs that human developers actually write, otherwise the learning will target the wrong parts of the feature space. We mine open source repositories for program fragments and apply deep learning techniques to automatically construct models for how humans write programs. We sample these models to generate an unbounded number of runnable training programs. The quality of the programs is such that even human developers struggle to distinguish our generated programs from hand-written code. We use our generator for OpenCL programs, CLgen, to automatically synthesize thousands of programs and show that learning over these improves the performance of a state of the art predictive model by 1.27x. In addition, the fine covering of the feature space automatically exposes weaknesses in the feature design which are invisible with the sparse training examples from existing benchmark suites. Correcting these weaknesses further increases performance by 4.30x.",
"title": ""
},
{
"docid": "76def4ca02a25669610811881531e875",
"text": "The design and implementation of a novel frequency synthesizer based on low phase-noise digital dividers and a direct digital synthesizer is presented. The synthesis produces two low noise accurate tunable signals at 10 and 100 MHz. We report the measured residual phase noise and frequency stability of the syn thesizer and estimate the total frequency stability, which can be expected from the synthesizer seeded with a signal near 11.2 GHz from an ultra-stable cryocooled sapphire oscillator (cryoCSO). The synthesizer residual single-sideband phase noise, at 1-Hz offset, on 10and 100-MHz signals was -135 and -130 dBc/Hz, respectively. The frequency stability contributions of these two sig nals was σ<sub>y</sub> = 9 × 10<sup>-15</sup> and σ<sub>y</sub> = 2.2 × 10<sup>-15</sup>, respectively, at 1-s integration time. The Allan deviation of the total fractional frequency noise on the 10- and 100-MHz signals derived from the synthesizer with the cry oCSO may be estimated, respectively, as σ<sub>y</sub> ≈ 3.6 × 10<sup>-15</sup> τ<sup>-1/2</sup> + 4 × 10<sup>-16</sup> and σ<sub>y</sub> ≈ s 5.2 × 10<sup>-2</sup> × 10<sup>-16</sup> τ<sup>-1/2</sup> + 3 × 10<sup>-16</sup>, respectively, for 1 ≤ τ <; 10<sup>4</sup>s. We also calculate the coherence function (a figure of merit for very long baseline interferometry in radio astronomy) for observation frequencies of 100, 230, and 345 GHz, when using the cry oCSO and a hydrogen maser. The results show that the cryoCSO offers a significant advantage at frequencies above 100 GHz.",
"title": ""
},
{
"docid": "5a85c72c5b9898b010f047ee99dba133",
"text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures. The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.",
"title": ""
},
{
"docid": "8618b407f851f0806920f6e28fdefe3f",
"text": "The explosive growth of Internet applications and content, during the last decade, has revealed an increasing need for information filtering and recommendation. Most research in the area of recommendation systems has focused on designing and implementing efficient algorithms that provide accurate recommendations. However, the selection of appropriate recommendation content and the presentation of information are equally important in creating successful recommender applications. This paper addresses issues related to the presentation of recommendations in the movies domain. The current work reviews previous research approaches and popular recommender systems, and focuses on user persuasion and satisfaction. In our experiments, we compare different presentation methods in terms of recommendations’ organization in a list (i.e. top N-items list and structured overview) and recommendation modality (i.e. simple text, combination of text and image, and combination of text and video). The most efficient presentation methods, regarding user persuasion and satisfaction, proved to be the “structured overview” and the “text and video” interfaces, while a strong positive correlation was also found between user satisfaction and persuasion in all experimental conditions.",
"title": ""
},
{
"docid": "aa58cb2b2621da6260aeb203af1bd6f1",
"text": "Aspect-based opinion mining from online reviews has attracted a lot of attention recently. The main goal of all of the proposed methods is extracting aspects and/or estimating aspect ratings. Recent works, which are often based on Latent Dirichlet Allocation (LDA), consider both tasks simultaneously. These models are normally trained at the item level, i.e., a model is learned for each item separately. Learning a model per item is fine when the item has been reviewed extensively and has enough training data. However, in real-life data sets such as those from Epinions.com and Amazon.com more than 90% of items have less than 10 reviews, so-called cold start items. State-of-the-art LDA models for aspect-based opinion mining are trained at the item level and therefore perform poorly for cold start items due to the lack of sufficient training data. In this paper, we propose a probabilistic graphical model based on LDA, called Factorized LDA (FLDA), to address the cold start problem. The underlying assumption of FLDA is that aspects and ratings of a review are influenced not only by the item but also by the reviewer. It further assumes that both items and reviewers can be modeled by a set of latent factors which represent their aspect and rating distributions. Different from state-of-the-art LDA models, FLDA is trained at the category level and learns the latent factors using the reviews of all the items of a category, in particular the non cold start items, and uses them as prior for cold start items. Our experiments on three real-life data sets demonstrate the improved effectiveness of the FLDA model in terms of likelihood of the held-out test set. We also evaluate the accuracy of FLDA based on two application-oriented measures.",
"title": ""
},
{
"docid": "3e7e4b5c2a73837ac5fa111a6dc71778",
"text": "Merging the best features of RBAC and attribute-based systems can provide effective access control for distributed and rapidly changing applications.",
"title": ""
},
{
"docid": "edd6d9843c8c24497efa336d1a26be9d",
"text": "Alzheimer's disease (AD) can be diagnosed with a considerable degree of accuracy. In some centers, clinical diagnosis predicts the autopsy diagnosis with 90% certainty in series reported from academic centers. The characteristic histopathologic changes at autopsy include neurofibrillary tangles, neuritic plaques, neuronal loss, and amyloid angiopathy. Mutations on chromosomes 21, 14, and 1 cause familial AD. Risk factors for AD include advanced age, lower intelligence, small head size, and history of head trauma; female gender may confer additional risks. Susceptibility genes do not cause the disease by themselves but, in combination with other genes or epigenetic factors, modulate the age of onset and increase the probability of developing AD. Among several putative susceptibility genes (on chromosomes 19, 12, and 6), the role of apolipoprotein E (ApoE) on chromosome 19 has been repeatedly confirmed. Protective factors include ApoE-2 genotype, history of estrogen replacement therapy in postmenopausal women, higher educational level, and history of use of nonsteroidal anti-inflammatory agents. The most proximal brain events associated with the clinical expression of dementia are progressive neuronal dysfunction and loss of neurons in specific regions of the brain. Although the cascade of antecedent events leading to the final common path of neurodegeneration must be determined in greater detail, the accumulation of stable amyloid is increasingly widely accepted as a central pathogenetic event. All mutations known to cause AD increase the production of beta-amyloid peptide. This protein is derived from amyloid precursor protein and, when aggregated in a beta-pleated sheet configuration, is neurotoxic and forms the core of neuritic plaques. Nerve cell loss in selected nuclei leads to neurochemical deficiencies, and the combination of neuronal loss and neurotransmitter deficits leads to the appearance of the dementia syndrome. The destructive aspects include neurochemical deficits that disrupt cell-to-cell communications, abnormal synthesis and accumulation of cytoskeletal proteins (e.g., tau), loss of synapses, pruning of dendrites, damage through oxidative metabolism, and cell death. The concepts of cognitive reserve and symptom thresholds may explain the effects of education, intelligence, and brain size on the occurrence and timing of AD symptoms. Advances in understanding the pathogenetic cascade of events that characterize AD provide a framework for early detection and therapeutic interventions, including transmitter replacement therapies, antioxidants, anti-inflammatory agents, estrogens, nerve growth factor, and drugs that prevent amyloid formation in the brain.",
"title": ""
},
{
"docid": "0182e6dcf7c8ec981886dfa2586a0d5d",
"text": "MOTIVATION\nMetabolomics is a post genomic technology which seeks to provide a comprehensive profile of all the metabolites present in a biological sample. This complements the mRNA profiles provided by microarrays, and the protein profiles provided by proteomics. To test the power of metabolome analysis we selected the problem of discrimating between related genotypes of Arabidopsis. Specifically, the problem tackled was to discrimate between two background genotypes (Col0 and C24) and, more significantly, the offspring produced by the crossbreeding of these two lines, the progeny (whose genotypes would differ only in their maternally inherited mitichondia and chloroplasts).\n\n\nOVERVIEW\nA gas chromotography--mass spectrometry (GCMS) profiling protocol was used to identify 433 metabolites in the samples. The metabolomic profiles were compared using descriptive statistics which indicated that key primary metabolites vary more than other metabolites. We then applied neural networks to discriminate between the genotypes. This showed clearly that the two background lines can be discrimated between each other and their progeny, and indicated that the two progeny lines can also be discriminated. We applied Euclidean hierarchical and Principal Component Analysis (PCA) to help understand the basis of genotype discrimination. PCA indicated that malic acid and citrate are the two most important metabolites for discriminating between the background lines, and glucose and fructose are two most important metabolites for discriminating between the crosses. These results are consistant with genotype differences in mitochondia and chloroplasts.",
"title": ""
},
{
"docid": "8fb37cad9ad964598ed718f0c32eaff1",
"text": "A planar W-band monopulse antenna array is designed based on the substrate integrated waveguide (SIW) technology. The sum-difference comparator, 16-way divider and 32 × 32 slot array antenna are all integrated on a single dielectric substrate in the compact layout through the low-cost PCB process. Such a substrate integrated monopulse array is able to operate over 93 ~ 96 GHz with narrow-beam and high-gain. The maximal gain is measured to be 25.8 dBi, while the maximal null-depth is measured to be - 43.7 dB. This SIW monopulse antenna not only has advantages of low-cost, light, easy-fabrication, etc., but also has good performance validated by measurements. It presents an excellent candidate for W-band directional-finding systems.",
"title": ""
},
{
"docid": "fdb0009b962254761541eb08f556fa0e",
"text": "Nonionic surfactants are widely used in the development of protein pharmaceuticals. However, the low level of residual peroxides in surfactants can potentially affect the stability of oxidation-sensitive proteins. In this report, we examined the peroxide formation in polysorbate 80 under a variety of storage conditions and tested the potential of peroxides in polysorbate 80 to oxidize a model protein, IL-2 mutein. For the first time, we demonstrated that peroxides can be easily generated in neat polysorbate 80 in the presence of air during incubation at elevated temperatures. Polysorbate 80 in aqueous solution exhibited a faster rate of peroxide formation and a greater amount of peroxides during incubation, which is further promoted/catalyzed by light. Peroxide formation can be greatly inhibited by preventing any contact with air/oxygen during storage. IL-2 mutein can be easily oxidized both in liquid and solid states. A lower level of peroxides in polysorbate 80 did not change the rate of IL-2 mutein oxidation in liquid state but significantly accelerated its oxidation in solid state under air. A higher level of peroxides in polysorbate 80 caused a significant increase in IL-2 mutein oxidation both in liquid and solid states, and glutathione can significantly inhibit the peroxide-induced oxidation of IL-2 mutein in a lyophilized formulation. In addition, a higher level of peroxides in polysorbate 80 caused immediate IL-2 mutein oxidation during annealing in lyophilization, suggesting that implementation of an annealing step needs to be carefully evaluated in the development of a lyophilization process for oxidation-sensitive proteins in the presence of polysorbate.",
"title": ""
},
{
"docid": "c589dd4a3da018fbc62d69e2d7f56e88",
"text": "More than 520 soil samples were surveyed for species of the mycoparasitic zygomycete genus Syncephalis using a culture-based approach. These fungi are relatively common in soil using the optimal conditions for growing both the host and parasite. Five species obtained in dual culture are unknown to science and are described here: (i) S. digitata with sporangiophores short, merosporangia separate at the apices, simple, 3-5 spored; (ii) S. floridana, which forms galls in the host and has sporangiophores up to 170 µm long with unbranched merosporangia that contain 2-4 spores; (iii) S. pseudoplumigaleta, with an abrupt apical bend in the sporophore; (iv) S. pyriformis with fertile vesicles that are long-pyriform; and (v) S. unispora with unispored merosporangia. To facilitate future molecular comparisons between species of Syncephalis and to allow identification of these fungi from environmental sampling datasets, we used Syncephalis-specific PCR primers to generate internal transcribed spacer (ITS) sequences for all five new species.",
"title": ""
},
{
"docid": "9b44cee4e65922bb07682baf0d395730",
"text": "Zero-shot learning has gained popularity due to its potential to scale recognition models without requiring additional training data. This is usually achieved by associating categories with their semantic information like attributes. However, we believe that the potential offered by this paradigm is not yet fully exploited. In this work, we propose to utilize the structure of the space spanned by the attributes using a set of relations. We devise objective functions to preserve these relations in the embedding space, thereby inducing semanticity to the embedding space. Through extensive experimental evaluation on five benchmark datasets, we demonstrate that inducing semanticity to the embedding space is beneficial for zero-shot learning. The proposed approach outperforms the state-of-the-art on the standard zero-shot setting as well as the more realistic generalized zero-shot setting. We also demonstrate how the proposed approach can be useful for making approximate semantic inferences about an image belonging to a category for which attribute information is not available.",
"title": ""
},
{
"docid": "0e2d6ebfade09beb448e9c538dadd015",
"text": "Matching incomplete or partial fingerprints continues to be an important challenge today, despite the advances made in fingerprint identification techniques. While the introduction of compact silicon chip-based sensors that capture only part of the fingerprint has made this problem important from a commercial perspective, there is also considerable interest in processing partial and latent fingerprints obtained at crime scenes. When the partial print does not include structures such as core and delta, common matching methods based on alignment of singular structures fail. We present an approach that uses localized secondary features derived from relative minutiae information. A flow network-based matching technique is introduced to obtain one-to-one correspondence of secondary features. Our method balances the tradeoffs between maximizing the number of matches and minimizing total feature distance between query and reference fingerprints. A two-hidden-layer fully connected neural network is trained to generate the final similarity score based on minutiae matched in the overlapping areas. Since the minutia-based fingerprint representation is an ANSI-NIST standard [American National Standards Institute, New York, 1993], our approach has the advantage of being directly applicable to existing databases. We present results of testing on FVC2002’s DB1 and DB2 databases. 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6386c0ef0d7cc5c33e379d9c4c2ca019",
"text": "BACKGROUND\nEven after negative sentinel lymph node biopsy (SLNB) for primary melanoma, patients who develop in-transit (IT) melanoma or local recurrences (LR) can have subclinical regional lymph node involvement.\n\n\nSTUDY DESIGN\nA prospective database identified 33 patients with IT melanoma/LR who underwent technetium 99m sulfur colloid lymphoscintigraphy alone (n = 15) or in conjunction with lymphazurin dye (n = 18) administered only if the IT melanoma/LR was concurrently excised.\n\n\nRESULTS\nSeventy-nine percent (26 of 33) of patients undergoing SLNB in this study had earlier removal of lymph nodes in the same lymph node basin as the expected drainage of the IT melanoma or LR at the time of diagnosis of their primary melanoma. Lymphoscintography at time of presentation with IT melanoma/LR was successful in 94% (31 of 33) cases, and at least 1 sentinel lymph node was found intraoperatively in 97% (30 of 31) cases. The SLNB was positive in 33% (10 of 30) of these cases. Completion lymph node dissection was performed in 90% (9 of 10) of patients. Nine patients with negative SLNB and IT melanoma underwent regional chemotherapy. Patients in this study with a positive sentinel lymph node at the time the IT/LR was mapped had a considerably shorter time to development of distant metastatic disease compared with those with negative sentinel lymph nodes.\n\n\nCONCLUSIONS\nIn this study, we demonstrate the technical feasibility and clinical use of repeat SLNB for recurrent melanoma. Performing SLNB cannot only optimize local, regional, and systemic treatment strategies for patients with LR or IT melanoma, but also appears to provide important prognostic information.",
"title": ""
},
{
"docid": "a741a386cdbaf977468782c1971c8d86",
"text": "There is a trend that, virtually everyone, ranging from big Web companies to traditional enterprisers to physical science researchers to social scientists, is either already experiencing or anticipating unprecedented growth in the amount of data available in their world, as well as new opportunities and great untapped value. This paper reviews big data challenges from a data management respective. In particular, we discuss big data diversity, big data reduction, big data integration and cleaning, big data indexing and query, and finally big data analysis and mining. Our survey gives a brief overview about big-data-oriented research and problems.",
"title": ""
},
{
"docid": "dc5e69ca604d7fde242876d5464fb045",
"text": "We propose a general Convolutional Neural Network (CNN) encoder model for machine translation that fits within in the framework of Encoder-Decoder models proposed by Cho, et. al. [1]. A CNN takes as input a sentence in the source language, performs multiple convolution and pooling operations, and uses a fully connected layer to produce a fixed-length encoding of the sentence as input to a Recurrent Neural Network decoder (using GRUs or LSTMs). The decoder, encoder, and word embeddings are jointly trained to maximize the conditional probability of the target sentence given the source sentence. Many variations on the basic model are possible and can improve the performance of the model.",
"title": ""
}
] |
scidocsrr
|
97a95b08d96e23560c189eb9e2696920
|
Missing Modality Transfer Learning via Latent Low-Rank Constraint
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "50c3e7855f8a654571a62a094a86c4eb",
"text": "In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.",
"title": ""
}
] |
[
{
"docid": "a00ac4cefbb432ffcc6535dd8fd56880",
"text": "Mobile activity recognition focuses on inferring current user activities by leveraging sensory data available on today's sensor rich mobile phones. Supervised learning with static models has been applied pervasively for mobile activity recognition. In this paper, we propose a novel phone-based dynamic recognition framework with evolving data streams for activity recognition. The novel framework incorporates incremental and active learning for real-time recognition and adaptation in streaming settings. While stream evolves, we refine, enhance and personalise the learning model in order to accommodate the natural drift in a given data stream. Extensive experimental results using real activity recognition data have evidenced that the novel dynamic approach shows improved performance of recognising activities especially across different users. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e50b074abe37cc8caec8e3922347e0d9",
"text": "Subjectivity and sentiment analysis (SSA) has recently gained considerable attention, but most of the resources and systems built so far are tailored to English and other Indo-European languages. The need for designing systems for other languages is increasing, especially as blogging and micro-blogging websites become popular throughout the world. This paper surveys different techniques for SSA for Arabic. After a brief synopsis about Arabic, we describe the main existing techniques and test corpora for Arabic SSA that have been introduced in the literature.",
"title": ""
},
{
"docid": "fd455a51a5f96251b31db5e6eae34ecc",
"text": "As an infrastructural and productive industry, tourism is very important in modern economy and includes different scopes and functions. If it is developed appropriately, cultural relations and economic development of countries will be extended and provided. Web development as an applied tool in the internet plays a very determining role in tourism success and proper exploitation of it can pave the way for more development and success of this industry. On the other hand, the amount of data in the current world has been increased and analysis of large sets of data that is referred to as big data has been converted into a strategic approach to enhance competition and establish new methods for development, growth, innovation, and enhancement of the number of customers. Today, big data is one of the important issues of information management in digital age and one of the main opportunities in tourism industry for optimal exploitation of maximum information. Big data can shape experiences of smart travel. Remarkable growth of these data sources has inspired new Strategies to understand the socio-economic phenomenon in different fields. The analytical approach of big data emphasizes the capacity of data collection and analysis with an unprecedented extent, depth and scale for solving the problems of real life and uses it. Indeed, big data analyses open the doors to various opportunities for developing the modern knowledge or changing our understanding of this scope and support decision-making in tourism industry. The purpose of this study is to show helpfulness of big data analysis to discover behavioral patterns in tourism industry and propose a model for employing data in tourism.",
"title": ""
},
{
"docid": "93df3ce5213252f8ae7dbd396ebb71bd",
"text": "Role-Based Access Control (RBAC) has been the dominant access control model in industry since the 1990s. It is widely implemented in many applications, including major cloud platforms such as OpenStack, AWS, and Microsoft Azure. However, due to limitations of RBAC, there is a shift towards Attribute-Based Access Control (ABAC) models to enhance flexibility by using attributes beyond roles and groups. In practice, this shift has to be gradual since it is unrealistic for existing systems to abruptly adopt ABAC models, completely eliminating current RBAC implementations.In this paper, we propose an ABAC extension with user attributes for the OpenStack Access Control (OSAC) model and demonstrate its enforcement utilizing the Policy Machine (PM) developed by the National Institute of Standards and Technology. We utilize some of the PM's components along with a proof-of-concept implementation to enforce this ABAC extension for OpenStack, while keeping OpenStack's current RBAC architecture in place. This provides the benefits of enhancing access control flexibility with support of user attributes, while minimizing the overhead of altering the existing OpenStack access control framework. We present use cases to depict added benefits of our model and show enforcement results. We then evaluate the performance of our proposed ABAC extension, and discuss its applicability and possible performance enhancements.",
"title": ""
},
{
"docid": "1430e6cb8a758d97335af0fc337e0c08",
"text": "Low-cost Radio Frequency Identification (RFID) tags affixed to consumer items as smart labels are emerging as one of the most pervasive computing technologies in history. This presents a number of advantages, but also opens a huge number of security problems that need to be addressed before its successful deployment. Many proposals have recently appeared, but all of them are based on RFID tags using classical cryptographic primitives such as Pseudorandom Number Generators (PRNGs), hash functions, or block ciphers. We believe this assumption to be fairly unrealistic, as classical cryptographic constructions lie well beyond the computational reach of very low-cost RFID tags. A new approach is necessary to tackle the problem, so we propose a minimalist lightweight mutual authentication protocol for low-cost RFID tags that offers an adequate security level for certain applications, which could be implemented even in the most limited low-cost tags as it only needs around 300 gates.",
"title": ""
},
{
"docid": "aed009b4d5cbf184f9eb321c9d2d7e5f",
"text": "A novel and simple half ring monopole antenna is presented here. The proposed antenna has been fed by a microstrip line to provide bandwidth supporting ultra wideband (UWB) characteristics. While decreasing the physical size of the antenna, the parameters that affect the performance of the antenna have been investigated here.",
"title": ""
},
{
"docid": "cb2309b5290572cf7211f69cac7b99e8",
"text": "Real-time tracking of human body motion is an important technology in synthetic environments, robotics, and other human-computer interaction applications. This paper presents an extended Kalman filter designed for real-time estimation of the orientation of human limb segments. The filter processes data from small inertial/magnetic sensor modules containing triaxial angular rate sensors, accelerometers, and magnetometers. The filter represents rotation using quaternions rather than Euler angles or axis/angle pairs. Preprocessing of the acceleration and magnetometer measurements using the Quest algorithm produces a computed quaternion input for the filter. This preprocessing reduces the dimension of the state vector and makes the measurement equations linear. Real-time implementation and testing results of the quaternion-based Kalman filter are presented. Experimental results validate the filter design, and show the feasibility of using inertial/magnetic sensor modules for real-time human body motion tracking",
"title": ""
},
{
"docid": "493c45304bd5b7dd1142ace56e94e421",
"text": "While closed timelike curves (CTCs) are not known to exist, studying their consequences has led to nontrivial insights in general relativity, quantum information, and other areas. In this paper we show that if CTCs existed, then quantum computers would be no more powerful than classical computers: both would have the (extremely large) power of the complexity class PSPACE, consisting of all problems solvable by a conventional computer using a polynomial amount of memory. This solves an open problem proposed by one of us in 2005, and gives an essentially complete understanding of computational complexity in the presence of CTCs. Following the work of Deutsch, we treat a CTC as simply a region of spacetime where a “causal consistency” condition is imposed, meaning that Nature has to produce a (probabilistic or quantum) fixed-point of some evolution operator. Our conclusion is then a consequence of the following theorem: given any quantum circuit (not necessarily unitary), a fixed-point of the circuit can be (implicitly) computed in polynomial space. This theorem might have independent applications in quantum information.",
"title": ""
},
{
"docid": "e75dccbe66ee79c7e1dee67e3df4dc12",
"text": "In recent years, many publications showed that convolutional neural network based features can have a superior performance to engineered features. However, not much effort was taken so far to extract local features efficiently for a whole image. In this paper, we present an approach to compute patch-based local feature descriptors efficiently in presence of pooling and striding layers for whole images at once. Our approach is generic and can be applied to nearly all existing network architectures. This includes networks for all local feature extraction tasks like camera calibration, Patchmatching, optical flow estimation and stereo matching. In addition, our approach can be applied to other patchbased approaches like sliding window object detection and recognition. We complete our paper with a speed benchmark of popular CNN based feature extraction approaches applied on a whole image, with and without our speedup, and example code (for Torch) that shows how an arbitrary CNN architecture can be easily converted by our approach.",
"title": ""
},
{
"docid": "74d2d780291e9dbf2e725b55ccadd278",
"text": "Organizational climate and organizational culture theory and research are reviewed. The article is first framed with definitions of the constructs, and preliminary thoughts on their interrelationships are noted. Organizational climate is briefly defined as the meanings people attach to interrelated bundles of experiences they have at work. Organizational culture is briefly defined as the basic assumptions about the world and the values that guide life in organizations. A brief history of climate research is presented, followed by the major accomplishments in research on the topic with regard to levels issues, the foci of climate research, and studies of climate strength. A brief overview of the more recent study of organizational culture is then introduced, followed by samples of important thinking and research on the roles of leadership and national culture in understanding organizational culture and performance and culture as a moderator variable in research in organizational behavior. The final section of the article proposes an integration of climate and culture thinking and research and concludes with practical implications for the management of effective contemporary organizations. Throughout, recommendations are made for additional thinking and research.",
"title": ""
},
{
"docid": "53ada9fce2d0af2208c4c312870a2912",
"text": "This paper describes a CMOS capacitive sensing amplifier for a monolithic MEMS accelerometer fabricated by post-CMOS surface micromachining. This chopper stabilized amplifier employs capacitance matching with optimal transistor sizing to minimize sensor noise floor. Offsets due to sensor and circuit are reduced by ac offset calibration and dc offset cancellation based on a differential difference amplifier (DDA). Low-duty-cycle periodic reset is used to establish robust dc bias at the sensing electrodes with low noise. This work shows that continuous-time voltage sensing can achieve lower noise than switched-capacitor charge integration for sensing ultra-small capacitance changes. A prototype accelerometer integrated with this circuit achieves 50g Hz acceleration noise floor and 0.02-aF Hz capacitance noise floor while chopped at 1 MHz.",
"title": ""
},
{
"docid": "a4d3cebea4be0bbb7890c033e7f252c1",
"text": "In this paper, we investigate continuum manipulators that are analogous to conventional rigid-link parallel robot designs. These “parallel continuum manipulators” have the potential to inherit some of the compactness and compliance of continuum robots while retaining some of the precision, stability, and strength of rigid-link parallel robots, yet they represent a relatively unexplored area of the broad manipulator design space. We describe the construction of a prototype manipulator structure with six compliant legs connected in a parallel pattern similar to that of a Stewart-Gough platform. We formulate the static forward and inverse kinematics problems for such manipulators as the solution to multiple Cosserat-rod models with coupled boundary conditions, and we test the accuracy of this approach in a set of experiments, including the prediction of leg buckling. An inverse kinematics simulation of slices through the 6 degree-of-freedom (DOF) workspace illustrates the kinematic mapping, range of motion, and force required for actuation, which sheds light on the potential advantages and tradeoffs that parallel continuum manipulators may bring. Potential applications include miniature wrists and arms for endoscopic medical procedures, and lightweight compliant arms for safe interaction with humans.",
"title": ""
},
{
"docid": "5bb15e64e7e32f3a0b1b99be8b8ab2bf",
"text": "Breast cancer is one of the major causes of death in women when compared to all other cancers. Breast cancer has become the most hazardous types of cancer among women in the world. Early detection of breast cancer is essential in reducing life losses. This paper presents a comparison among the different Data mining classifiers on the database of breast cancer Wisconsin Breast Cancer (WBC), by using classification accuracy. This paper aims to establish an accurate classification model for Breast cancer prediction, in order to make full use of the invaluable information in clinical data, especially which is usually ignored by most of the existing methods when they aim for high prediction accuracies. We have done experiments on WBC data. The dataset is divided into training set with 499 and test set with 200 patients. In this experiment, we compare six classification techniques in Weka software and comparison results show that Support Vector Machine (SVM) has higher prediction accuracy than those methods. Different methods for breast cancer detection are explored and their accuracies are compared. With these results, we infer that the SVM are more suitable in handling the classification problem of breast cancer prediction, and we recommend the use of these approaches in similar classification problems. Keywords—breast cancer; classification; Decision tree, Naïve Bayes, MLP, Logistic Regression SVM, KNN and weka;",
"title": ""
},
{
"docid": "b5b5e87aa833cdabd52f9072296c49f8",
"text": "In the modern e-commerce, the behaviors of customers contain rich information, e.g., consumption habits, the dynamics of preferences. Recently, session-based recommendationsare becoming popular to explore the temporal characteristics of customers' interactive behaviors. However, existing works mainly exploit the short-term behaviors without fully taking the customers' long-term stable preferences and evolutions into account. In this paper, we propose a novel Behavior-Intensive Neural Network (BINN) for next-item recommendation by incorporating both users' historical stable preferences and present consumption motivations. Specifically, BINN contains two main components, i.e., Neural Item Embedding, and Discriminative Behaviors Learning. Firstly, a novel item embedding method based on user interactions is developed for obtaining an unified representation for each item. Then, with the embedded items and the interactive behaviors over item sequences, BINN discriminatively learns the historical preferences and present motivations of the target users. Thus, BINN could better perform recommendations of the next items for the target users. Finally, for evaluating the performances of BINN, we conduct extensive experiments on two real-world datasets, i.e., Tianchi and JD. The experimental results clearly demonstrate the effectiveness of BINN compared with several state-of-the-art methods.",
"title": ""
},
{
"docid": "c0e99b3b346ef219e8898c3608d2664f",
"text": "A depth image-based rendering (DIBR) technique is one of the rendering processes of virtual views with a color image and the corresponding depth map. The most important issue of DIBR is that the virtual view has no information at newly exposed areas, so called disocclusion. The general solution is to smooth the depth map using a Gaussian smoothing filter before 3D warping. However, the filtered depth map causes geometric distortion and the depth quality is seriously degraded. Therefore, we propose a new depth map filtering algorithm to solve the disocclusion problem while maintaining the depth quality. In order to preserve the visual quality of the virtual view, we smooth the depth map with further reduced deformation. After extracting object boundaries depending on the position of the virtual view, we apply a discontinuity-adaptive smoothing filter according to the distance of the object boundary and the amount of depth discontinuities. Finally, we obtain the depth map with higher quality compared to other methods. Experimental results showed that the disocclusion is efficiently removed and the visual quality of the virtual view is maintained.",
"title": ""
},
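The preceding abstract describes smoothing a depth map adaptively around object boundaries before 3D warping. The snippet below is only a rough sketch of that general idea rather than the authors' filter; the gradient-based weighting, the fixed Gaussian width, and the threshold are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def discontinuity_adaptive_smoothing(depth, sigma=5.0, edge_thresh=10.0):
    # Magnitude of the depth discontinuities (vertical and horizontal gradients).
    d = depth.astype(float)
    grad = np.hypot(sobel(d, axis=0), sobel(d, axis=1))
    # Heavily smoothed version of the depth map.
    smoothed = gaussian_filter(d, sigma=sigma)
    # Blend toward the smoothed map only where discontinuities are strong,
    # so flat regions keep their original depth values.
    w = np.clip(grad / edge_thresh, 0.0, 1.0)
    return (1.0 - w) * d + w * smoothed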
{
"docid": "adb42f43e57458888344dc97bbae9439",
"text": "We present a general picture of the parallel meta-heuristic search for optimization. We recall the main concepts and strategies in designing parallel metaheuristics, pointing to a number of contributions that instantiated them for neighborhoodand populationbased meta-heuristics, and identify trends and promising research directions. We focus on cooperation-based strategies, which display remarkable performances, in particular on asynchronous cooperation and advanced cooperation strategies that create new information out of exchanged data to enhance the global guidance of the search.",
"title": ""
},
{
"docid": "3cd19e73aade3e99fff4b213afd3c678",
"text": "We describe the dialogue model for the virtual humans developed at the Institute for Creative Technologies at the University of Southern California. The dialogue model contains a rich set of information state and dialogue moves to allow a wide range of behaviour in multimodal, multiparty interaction. We extend this model to enable non-team negotiation, using ideas from social science literature on negotiation and implemented strategies and dialogue moves for this area. We present a virtual human doctor who uses this model to engage in multimodal negotiation dialogue with people from other organisations. The doctor is part of the SASO-ST system, used for training for non-team interactions.",
"title": ""
},
{
"docid": "9b2dc34302b69ca863e4bcca26e09c96",
"text": "Two opposing theories have been proposed to explain competitive advantage of firms. First, the market-based view (MBV) is focused on product or market positions and competition while second, the resource-based view (RBV) aims at explaining success by inwardly looking at unique resources and capabilities of a firm. Research has been struggling to distinguish impacts of these theories for illuminating performance. Business models are seen as an important concept to systemize the business and value creation logic of firms by defining different core components. Thus, this paper tries to assess associations between these components and MBV or RBV perspectives by applying content analysis. Two of the business model components were found to have strong links with the MBV while three of them showed indications of their roots lying in the resource-based perspective. These results are discussed and theorized in a final step by suggesting frameworks of the corresponding perspectives for further explaining competitive advantage.",
"title": ""
},
{
"docid": "f74ccd06a302b70980d7b3ba2ee76cfb",
"text": "As the world becomes more connected to the cyber world, attackers and hackers are becoming increasingly sophisticated to penetrate computer systems and networks. Intrusion Detection System (IDS) plays a vital role in defending a network against intrusion. Many commercial IDSs are available in marketplace but with high cost. At the same time open source IDSs are also available with continuous support and upgradation from large user community. Each of these IDSs adopts a different approaches thus may target different applications. This paper provides a quick review of six Open Source IDS tools so that one can choose the appropriate Open Source IDS tool as per their organization requirements.",
"title": ""
}
] |
scidocsrr
|
4cc3c9a39d8ff4e4b6c746b82af187d9
|
Solving real-world cutting stock-problems in the paper industry: Mathematical approaches, experience and challenges
|
[
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "f0a3a1855103ebac224e1351d4fc24df",
"text": "BACKGROUND\nThere have been many randomised trials of adjuvant tamoxifen among women with early breast cancer, and an updated overview of their results is presented.\n\n\nMETHODS\nIn 1995, information was sought on each woman in any randomised trial that began before 1990 of adjuvant tamoxifen versus no tamoxifen before recurrence. Information was obtained and analysed centrally on each of 37000 women in 55 such trials, comprising about 87% of the worldwide evidence. Compared with the previous such overview, this approximately doubles the amount of evidence from trials of about 5 years of tamoxifen and, taking all trials together, on events occurring more than 5 years after randomisation.\n\n\nFINDINGS\nNearly 8000 of the women had a low, or zero, level of the oestrogen-receptor protein (ER) measured in their primary tumour. Among them, the overall effects of tamoxifen appeared to be small, and subsequent analyses of recurrence and total mortality are restricted to the remaining women (18000 with ER-positive tumours, plus nearly 12000 more with untested tumours, of which an estimated 8000 would have been ER-positive). For trials of 1 year, 2 years, and about 5 years of adjuvant tamoxifen, the proportional recurrence reductions produced among these 30000 women during about 10 years of follow-up were 21% (SD 3), 29% (SD 2), and 47% (SD 3), respectively, with a highly significant trend towards greater effect with longer treatment (chi2(1)=52.0, 2p<0.00001). The corresponding proportional mortality reductions were 12% (SD 3), 17% (SD 3), and 26% (SD 4), respectively, and again the test for trend was significant (chi2(1) = 8.8, 2p=0.003). The absolute improvement in recurrence was greater during the first 5 years, whereas the improvement in survival grew steadily larger throughout the first 10 years. The proportional mortality reductions were similar for women with node-positive and node-negative disease, but the absolute mortality reductions were greater in node-positive women. In the trials of about 5 years of adjuvant tamoxifen the absolute improvements in 10-year survival were 10.9% (SD 2.5) for node-positive (61.4% vs 50.5% survival, 2p<0.00001) and 5.6% (SD 1.3) for node-negative (78.9% vs 73.3% survival, 2p<0.00001). These benefits appeared to be largely irrespective of age, menopausal status, daily tamoxifen dose (which was generally 20 mg), and of whether chemotherapy had been given to both groups. In terms of other outcomes among all women studied (ie, including those with \"ER-poor\" tumours), the proportional reductions in contralateral breast cancer were 13% (SD 13), 26% (SD 9), and 47% (SD 9) in the trials of 1, 2, or about 5 years of adjuvant tamoxifen. The incidence of endometrial cancer was approximately doubled in trials of 1 or 2 years of tamoxifen and approximately quadrupled in trials of 5 years of tamoxifen (although the number of cases was small and these ratios were not significantly different from each other). The absolute decrease in contralateral breast cancer was about twice as large as the absolute increase in the incidence of endometrial cancer. Tamoxifen had no apparent effect on the incidence of colorectal cancer or, after exclusion of deaths from breast or endometrial cancer, on any of the other main categories of cause of death (total nearly 2000 such deaths; overall relative risk 0.99 [SD 0.05]).\n\n\nINTERPRETATION\nFor women with tumours that have been reliably shown to be ER-negative, adjuvant tamoxifen remains a matter for research. 
However, some years of adjuvant tamoxifen treatment substantially improves the 10-year survival of women with ER-positive tumours and of women whose tumours are of unknown ER status, with the proportional reductions in breast cancer recurrence and in mortality appearing to be largely unaffected by other patient characteristics or treatments.",
"title": ""
},
{
"docid": "3a32ac999ea003d992f3dd7d7d41d601",
"text": "Collectively, disruptive technologies and market forces have resulted in a significant shift in the structure of many industries, presenting a serious challenge to near-term profitability and long-term viability. Cloud capabilities continue to promise payoffs in reduced costs and increased efficiencies, but in this article, we show they can provide business model transformation opportunities as well. To date, the focus of much research on cloud computing and cloud services has been on understanding the technology challenges, business opportunities or applications for particular domains.3 Cloud services, however, also offer great new opportunities for small and mediumsized enterprises (SMEs) that lack large IT shops or internal capabilities, as well as larger firms. An early analysis of four SMEs4 found that cloud services can offer both economic and business operational value previously denied them. This distinction is important because it shows that cloud services can provide value beyond simple cost avoidance or reduction",
"title": ""
},
{
"docid": "ebaedd43e151f13d1d4d779284af389d",
"text": "This paper presents the state of art techniques in recommender systems (RS). The various techniques are diagrammatically illustrated which on one hand helps a naïve researcher in this field to accommodate the on-going researches and establish a strong base, on the other hand it focuses on different categories of the recommender systems with deep technical discussions. The review studies on RS are highlighted which helps in understanding the previous review works and their directions. 8 different main categories of recommender techniques and 19 sub categories have been identified and stated. Further, soft computing approach for recommendation is emphasized which have not been well studied earlier. The major problems of the existing area is reviewed and presented from different perspectives. However, solutions to these issues are rarely discussed in the previous works, in this study future direction for possible solutions are also addressed.",
"title": ""
},
{
"docid": "1f94d244dd24bd9261613098c994cf9d",
"text": "With the development and introduction of smart metering, the energy information for costumers will change from infrequent manual meter readings to fine-grained energy consumption data. On the one hand these fine-grained measurements will lead to an improvement in costumers' energy habits, but on the other hand the fined-grained data produces information about a household and also households' inhabitants, which are the basis for many future privacy issues. To ensure household privacy and smart meter information owned by the household inhabitants, load hiding techniques were introduced to obfuscate the load demand visible at the household energy meter. In this work, a state-of-the-art battery-based load hiding (BLH) technique, which uses a controllable battery to disguise the power consumption and a novel load hiding technique called load-based load hiding (LLH) are presented. An LLH system uses an controllable household appliance to obfuscate the household's power demand. We evaluate and compare both load hiding techniques on real household data and show that both techniques can strengthen household privacy but only LLH can increase appliance level privacy.",
"title": ""
},
{
"docid": "7e42516a73e8e5f80d009d0ff305156c",
"text": "This article provides a review of evolutionary theory and empirical research on mate choices in nonhuman species and uses it as a frame for understanding the how and why of human mate choices. The basic principle is that the preferred mate choices and attendant social cognitions and behaviors of both women and men, and those of other species, have evolved to focus on and exploit the reproductive potential and reproductive investment of members of the opposite sex. Reproductive potential is defined as the genetic, material, and/or social resources an individual can invest in offspring, and reproductive investment is the actual use of these resources to enhance the physical and social well- being of offspring. Similarities and differences in the mate preferences and choices of women and men are reviewed and can be understood in terms of similarities and differences in the form of reproductive potential that women and men have to offer and their tendency to use this potential for the well-being of children.",
"title": ""
},
{
"docid": "caea6d9ec4fbaebafc894167cfb8a3d6",
"text": "Although the positive effects of different kinds of physical activity (PA) on cognitive functioning have already been demonstrated in a variety of studies, the role of cognitive engagement in promoting children's executive functions is still unclear. The aim of the current study was therefore to investigate the effects of two qualitatively different chronic PA interventions on executive functions in primary school children. Children (N = 181) aged between 10 and 12 years were assigned to either a 6-week physical education program with a high level of physical exertion and high cognitive engagement (team games), a physical education program with high physical exertion but low cognitive engagement (aerobic exercise), or to a physical education program with both low physical exertion and low cognitive engagement (control condition). Executive functions (updating, inhibition, shifting) and aerobic fitness (multistage 20-m shuttle run test) were measured before and after the respective condition. Results revealed that both interventions (team games and aerobic exercise) have a positive impact on children's aerobic fitness (4-5% increase in estimated VO2max). Importantly, an improvement in shifting performance was found only in the team games and not in the aerobic exercise or control condition. Thus, the inclusion of cognitive engagement in PA seems to be the most promising type of chronic intervention to enhance executive functions in children, providing further evidence for the importance of the qualitative aspects of PA.",
"title": ""
},
{
"docid": "461fbb108d5589621a7ff15fcc306153",
"text": "Current methods for detector gain calibration require acquisition of tens of special calibration images. Here we propose a method that obtains the gain from the actual image for which the photon count is desired by quantifying out-of-band information. We show on simulation and experimental data that our much simpler procedure, which can be retroactively applied to any image, is comparable in precision to traditional gain calibration procedures. Optical recordings consist of detected photons, which typically arrive in an uncorrelated manner at the detector. Therefore the recorded intensity follows a Poisson distribution, where the variance of the photon count is equal to its mean. In many applications images must be further processed based on these statistics and it is therefore of great importance to be able to relate measured values S in analogue-to-digital-units (ADU) to the detected (effective) photon numbers N. The relation between the measured signal S in ADU and the photon count N is given by the linear gain g as S = gN. Only after conversion to photons is it possible to establish the expected precision of intensities in the image, which is essential for single particle localization, maximum-likelihood image deconvolution or denoising [Ober2004, Smith2010, Afanasyev2015, Strohl2015]. The photon count must be established via gain calibration, as most image capturing devices do not directly report the number of detected photons, but a value proportional to the photoelectron charge produced in a photomultiplier tube or collected in a camera pixel. For this calibration typically tens of calibration images are recorded and the linear relationship between mean intensity and its variance is exploited [vanVliet1998]. In current microscopy practise a detector calibration to photon counts is often not done but cannot be performed in retrospect. It thus would be extremely helpful, if that can be determined from analysing the acquisition itself – a single image. A number of algorithms have been published for Gaussian type noise [Donoho1995, Immerkaer1996] and Poissonian type noise [Foi2008, Colom2014, Azzari2014, Pyatykh2014]. However, all these routines use assumed image properties to extract the information rather than just the properties of the acquisition process as in our presented algorithm. This has major implications for their performance on microscopy images governed by photon statistics (see Supplementary Information for a comparison with implementations from Pyatykh et al. [Pyatykh2014] and Azzari et al. [Azzari2014] which performed more than an order of magnitude worse than our method). Some devices, such as avalanche photodiodes, photomultiplier tubes (PMTs) or emCCD cameras can be operated in a single photon counting mode [Chao2013] where the gain is known to be one. In many cases, however, the gain is unknown and/or a device setting. For example, the gain of PMTs can be continuously controlled by changing the voltage between the dynodes and the gain of cameras may deviate from the value stated in the manual. To complicate matters, devices not running in photon counting mode, use an offset Ozero to avoid negative readout values, i.e. the device will yield a non-zero mean value even if no light reaches the detector, S = gN + Ozero. This offset value Ozero is sometimes changing over time (“offset drift”). Traditionally, a series of about 20 dark images and 20 images of a sample with smoothly changing intensity are recorded [vanVliet1998]. 
From these images the gain is calculated as the linear slope of the variance over these images versus the mean intensity, g = var(S)/mean(S) (for details see Supplementary Information). In Figure 1 we show a typical calibration curve by fitting (blue line) the experimentally obtained data (blue crosses). The obtained gain does not necessarily correspond to the real gain per detected photon, since it includes multiplicative noise sources such as multiplicative amplification noise, gain fluctuations or the excess noise of emCCDs and PMTs. In addition there is also readout noise, which includes thermal noise build-up and clock induced charge. The unknown readout noise and offset may seem at first glance disadvantageous for an automatic quantification. However, as shown below, these details do not matter for the purpose of predicting the correct noise from a measured signal. Let us first assume that we know the offset Ozero and any potential readout noise variance Vread. The region in Fourier space above the cut-off frequency of the support of the optical transfer function only contains noise in an image [Liu2017], where both Poisson and Gaussian noise are evenly distributed over all frequencies [Chanran1990, Liu2017]. By measuring the spectral power density of the noise VHF in this high-frequency out-of-band region and accounting for the area fraction f of this region in Fourier space, we can estimate the total variance Vall = VHF/f of all detected photons. The gain g is then obtained as (1) g = (Vall − Vread) / Σ(S − Ozero), where we relate the photon-noise-only variance Vall − Vread to the sum of the offset-corrected signal over all pixels in the image (see Online Methods). The device manufacturers usually provide the readout noise, leaving only the offset and gain to be determined from the image itself in practise. To estimate both the offset and the gain, we need more information from the linear mean-variance dependence than is given by equation (1). We achieve this by tiling the input image, e.g. into 3×3 sub-images, and processing each of these sub-images to generate one data point in a mean-variance plot. From these nine data points we obtain the axis offset (Ono-noise). We then perform the gain estimation (1) on the whole image after offset correction (see Online Methods and Supplementary Information). As seen from Figure 1, the linear regression of the mean-variance curve determines the axis offset ADU value Ono-noise at which zero noise would be expected. Yet we cannot simultaneously determine both the offset Ozero and the readout noise Vread. If either of them is known a priori, the other can be calculated as Vread = g(Ozero − Ono-noise), which is, however, not needed to predict the correct noise level for each brightness level based on the automatically determined value Ono-noise. To test the single-image gain calibration, simulated data was generated for a range of gains (0.1, 1, 10) with a constant offset (100 ADU), a range of readout noise (1, 2, 10 photon RMS) and maximum expected photon counts per pixel (10, 100, ..., 10). Offset and gain were both determined from band-limited single images of two different objects (resolution target and Einstein) without significant relative errors in the offset or gain (less than 2% at more than 10 expected maximum photon counts) using the proposed method (see Supplementary Figures S1-S3). 
Figure 1 quantitatively compares the intensity-dependent variance predicted by applying our method individually to many single experimental in-focus images (shaded green area) with the classical method evaluating a whole series of calibration images (blue line). Note that our single-image-based noise determination does not require any prior knowledge about offset or readout noise. Figure 2 shows a few typical example images acquired with various detectors together with the gain and offset determined from each of them and the calibration values obtained from the standard procedure for comparison. We evaluated the general applicability of our method on datasets from different detectors and modes of acquisition (CCD, emCCD, sCMOS, PMT, GAsP and Hybrid Detector). Figure 3 quantitatively compares experimental single-image calibration with classical calibration. 20 individual images were each submitted to our algorithm and the determined offset and gain were compared to the classical method. The variance of a separately acquired dark image was submitted to the algorithm as a readout noise estimate, but alternatively the readout noise specification from the handbook or a measured offset at zero intensity could be used. As seen from Figure 3, the single-image-based gain calibration as proposed performs nearly as well as the standard gain calibration using 20 images. The relative gain error stays generally well below 10% and for cameras below 2%. The 8.5% bias for the HyD photon counting system is unusually high, and we were unable to find a clear explanation for this deviation from the classical calibration. Using only lower frequencies to estimate VHF (kt = 0.4) resulted in a much smaller error of 2.5% in the single-photon counting case, suggesting that dead-time effects of the detector might have affected the high spatial frequencies. Simulations as well as experiments show good agreement of the determined gain with the ground truth or the gold-standard calibration, respectively. The bias of the gain determined by the single-image routine stayed below 4% (except for HyD). For intensity quantification, any potential offset must be subtracted before conversion to photon counts. Our method estimates the photon count very precisely over a large range of parameters (relative error below 2% in simulations). Our method could be applied to many different microscopy modes (widefield transmission, fluorescence, and confocal) and detector types (CCD, emCCD, sCMOS, PMT, GAsP and HyD photon counting), because we only require the existence of an out-of-band region, which purely contains frequency-independent noise. This is usually true if the image is sampled correctly. As discussed in the Supplementary Information, the cut-off limit of our algorithm can in practise be set below the transfer limit, and single-image calibration can even outperform the standard calibration if molecular blinking perturbs the measurement. In summary, we showed that single-image calibration is a simple and versatile tool. We expect our work to lead to a better ability to quantify intensities in general.",
"title": ""
},
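Because the passage above walks through the estimation procedure step by step, a compact numerical sketch may help. This is one reading of the described idea, not the authors' reference code: the cut-off frequency, the use of a per-pixel (rather than summed) variance, and the parameter names are assumptions.

import numpy as np

def estimate_gain(img, offset, var_read=0.0, cutoff=0.45):
    # cutoff: radial spatial frequency (cycles/pixel) assumed to lie beyond the
    # optical transfer function, so that region of Fourier space holds only noise.
    img = img.astype(float)
    F = np.fft.fft2(img - img.mean())
    fy, fx = np.meshgrid(np.fft.fftfreq(img.shape[0]),
                         np.fft.fftfreq(img.shape[1]), indexing="ij")
    out_of_band = np.hypot(fy, fx) > cutoff
    # White noise is spread evenly over all frequencies, so the mean out-of-band
    # spectral power extrapolates (via Parseval) to the per-pixel noise variance.
    var_noise = np.mean(np.abs(F[out_of_band]) ** 2) / img.size
    # Photon-noise-only variance divided by the mean offset-corrected signal.
    return (var_noise - var_read) / np.mean(img - offset)

Given a dark-image readout variance and the camera offset, the returned value plays the role of g in equation (1) above.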
{
"docid": "8fac18c1285875aee8e7a366555a4ca3",
"text": "Automatic speech recognition (ASR) has been under the scrutiny of researchers for many years. Speech Recognition System is the ability to listen what we speak, interpreter and perform actions according to spoken information. After so many detailed study and optimization of ASR and various techniques of features extraction, accuracy of the system is still a big challenge. The selection of feature extraction techniques is completely based on the area of study. In this paper, a detailed theory about features extraction techniques like LPC and LPCC is examined. The goal of this paper is to study the comparative analysis of features extraction techniques like LPC and LPCC.",
"title": ""
},
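As a concrete reference point for the LPC technique reviewed above (the passage itself is a theoretical comparison), a minimal autocorrelation-method implementation with the Levinson-Durbin recursion might look like the following; the windowing and framing choices are assumptions. LPCC features would then follow from these predictor coefficients via the standard cepstral recursion.

import numpy as np

def lpc(frame, order):
    # Autocorrelation method: window the frame, compute r[0..order], then solve
    # the normal equations with the Levinson-Durbin recursion.
    frame = np.asarray(frame, dtype=float) * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k                  # residual prediction error power
    return a, err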
{
"docid": "fa246c15531c6426cccaf4d216dc8375",
"text": "Proboscis lateralis is a rare craniofacial malformation characterized by absence of nasal cavity on one side with a trunk-like nasal appendage protruding from superomedial portion of the ipsilateral orbit. High-resolution computed tomography and magnetic resonance imaging are extremely useful in evaluating this congenital condition and the wide spectrum of associated anomalies occurring in the surrounding anatomical regions and brain. We present a case of proboscis lateralis in a 2-year-old girl with associated ipsilateral sinonasal aplasia, orbital cyst, absent olfactory bulb and olfactory tract. Absence of ipsilateral olfactory pathway in this rare disorder has been documented on high-resolution computed tomography and magnetic resonance imaging by us for the first time in English medical literature.",
"title": ""
},
{
"docid": "db7edbb1a255e9de8486abbf466f9583",
"text": "Nowadays, adopting an optimized irrigation system has become a necessity due to the lack of the world water resource. The system has a distributed wireless network of soil-moisture and temperature sensors. This project focuses on a smart irrigation system which is cost effective. As the technology is growing and changing rapidly, Wireless sensing Network (WSN) helps to upgrade the technology where automation is playing important role in human life. Automation allows us to control various appliances automatically. DC motor based vehicle is designed for irrigation purpose. The objectives of this paper were to control the water supply to each plant automatically depending on values of temperature and soil moisture sensors. Mechanism is done such that soil moisture sensor electrodes are inserted in front of each soil. It also monitors the plant growth using various parameters like height and width. Android app.",
"title": ""
},
{
"docid": "cce5d75bfcfc22f7af08f6b0b599d472",
"text": "In order to determine if exposure to carcinogens in fire smoke increases the risk of cancer, we examined the incidence of cancer in a cohort of 2,447 male firefighters in Seattle and Tacoma, (Washington, USA). The study population was followed for 16 years (1974–89) and the incidence of cancer, ascertained using a population-based tumor registry, was compared with local rates and with the incidence among 1,878 policemen from the same cities. The risk of cancer among firefighters was found to be similar to both the police and the general male population for most common sites. An elevated risk of prostate cancer was observed relative to the general population (standardized incidence ratio [SIR]=1.4, 95 percent confidence interval [CI]=1.1–1.7) but was less elevated compared with rates in policement (incidence density ratio [IDR]=1.1, CI=0.7–1.8) and was not related to duration of exposure. The risk of colon cancer, although only slightly elevated relative to the general population (SIR=1.1, CI=0.7–1.6) and the police (IDR=1.3, CI=0.6–3.0), appeared to increase with duration of employment. Although the relationship between firefighting and colon cancer is consistent with some previous studies, it is based on small numbers and may be due to chance. While this study did not find strong evidence for an excess risk of cancer, the presence of carcinogens in the firefighting environment warrants periodic re-evaluation of cancer incidence in this population and the continued use of protective equipment.",
"title": ""
},
{
"docid": "e28336bccbb1414dc9a92404f08b6b6f",
"text": "YouTube has become one of the largest websites on the Internet. Among its many genres, both professional and amateur science communicators compete for audience attention. This article provides the first overview of science communication on YouTube and examines content factors that affect the popularity of science communication videos on the site. A content analysis of 390 videos from 39 YouTube channels was conducted. Although professionally generated content is superior in number, user-generated content was significantly more popular. Furthermore, videos that had consistent science communicators were more popular than those without a regular communicator. This study represents an important first step to understand content factors, which increases the channel and video popularity of science communication on YouTube.",
"title": ""
},
{
"docid": "4b544bb34c55e663cdc5f0a05201e595",
"text": "BACKGROUND\nThis study seeks to examine a multidimensional model of student motivation and engagement using within- and between-network construct validation approaches.\n\n\nAIMS\nThe study tests the first- and higher-order factor structure of the motivation and engagement wheel and its corresponding measurement tool, the Motivation and Engagement Scale - High School (MES-HS; formerly the Student Motivation and Engagement Scale).\n\n\nSAMPLE\nThe study draws upon data from 12,237 high school students from 38 Australian high schools.\n\n\nMETHODS\nThe hypothesized 11-factor first-order structure and the four-factor higher-order structure, their relationship with a set of between-network measures (class participation, enjoyment of school, educational aspirations), factor invariance across gender and year-level, and the effects of age and gender are examined using confirmatory factor analysis and structural equation modelling.\n\n\nRESULTS\nIn terms of within-network validity, (1) the data confirm that the 11-factor and higher-order factor models of motivation and engagement are good fitting and (2) multigroup tests showed invariance across gender and year levels. In terms of between-network validity, (3) correlations with enjoyment of school, class participation and educational aspirations are in the hypothesized directions, and (4) girls reflect a more adaptive pattern of motivation and engagement, and year-level findings broadly confirm hypotheses that middle high school students seem to reflect a less adaptive pattern of motivation and engagement.\n\n\nCONCLUSION\nThe first- and higher-order structures hold direct implications for educational practice and directions for future motivation and engagement research.",
"title": ""
},
{
"docid": "c1ddf32bfa71f32e51daf31e077a87cd",
"text": "There is a step of significant difficulty experienced by brain-computer interface (BCI) users when going from the calibration recording to the feedback application. This effect has been previously studied and a supervised adaptation solution has been proposed. In this paper, we suggest a simple unsupervised adaptation method of the linear discriminant analysis (LDA) classifier that effectively solves this problem by counteracting the harmful effect of nonclass-related nonstationarities in electroencephalography (EEG) during BCI sessions performed with motor imagery tasks. For this, we first introduce three types of adaptation procedures and investigate them in an offline study with 19 datasets. Then, we select one of the proposed methods and analyze it further. The chosen classifier is offline tested in data from 80 healthy users and four high spinal cord injury patients. Finally, for the first time in BCI literature, we apply this unsupervised classifier in online experiments. Additionally, we show that its performance is significantly better than the state-of-the-art supervised approach.",
"title": ""
},
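One simple way to realize the kind of unsupervised LDA adaptation described above is to keep the discriminant direction fixed and track only a pooled estimate of the feature mean, which needs no labels. The sketch below reflects that general idea rather than the authors' exact update rules; the learning rate and the interface are assumptions.

import numpy as np

class PooledMeanAdaptiveLDA:
    def __init__(self, w, mu_global, eta=0.05):
        self.w = np.asarray(w, dtype=float)           # LDA weight vector (kept fixed)
        self.mu = np.asarray(mu_global, dtype=float)  # pooled mean from calibration data
        self.eta = eta                                # adaptation rate (assumed value)

    def classify(self, x):
        x = np.asarray(x, dtype=float)
        # Unsupervised update: shift the pooled mean toward each incoming trial,
        # which re-centers the LDA bias as the EEG feature statistics drift.
        self.mu = (1.0 - self.eta) * self.mu + self.eta * x
        score = float(self.w @ (x - self.mu))         # adapted bias: b = -w.T @ mu
        return (1 if score > 0 else -1), score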
{
"docid": "40a87654ac33c46f948204fd5c7ef4c1",
"text": "We introduce a novel scheme to train binary convolutional neural networks (CNNs) – CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations.",
"title": ""
},
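To make the weight-approximation idea in the ABC-Net abstract concrete, here is a small numerical sketch that approximates a real-valued weight tensor by a linear combination of binary bases. It paraphrases the abstract rather than reproducing the authors' implementation; the choice of shift values and the least-squares fit for the coefficients are assumptions.

import numpy as np

def approximate_with_binary_bases(W, M=3):
    # Build M binary (+/-1) bases as shifted signs of the mean-centred weights,
    # then fit coefficients alpha by least squares so that W ~ sum_i alpha_i * B_i.
    w = W.ravel()
    centred = w - w.mean()
    shifts = np.linspace(-1.0, 1.0, M)               # assumed spread of shifts
    B = np.stack([np.sign(centred + u * w.std()) for u in shifts], axis=1)
    B[B == 0] = 1.0                                  # avoid zero entries from sign()
    alpha, *_ = np.linalg.lstsq(B, w, rcond=None)
    W_approx = (B @ alpha).reshape(W.shape)
    return alpha, W_approx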
{
"docid": "5c96222feacb0454d353dcaa1f70fb83",
"text": "Geographically dispersed teams are rarely 100% dispersed. However, by focusing on teams that are either fully dispersed or fully co-located, team research to date has lived on the ends of a spectrum at which relatively few teams may actually work. In this paper, we develop a more robust view of geographic dispersion in teams. Specifically, we focus on the spatialtemporal distances among team members and the configuration of team members across sites (independent of the spatial and temporal distances separating those sites). To better understand the nature of dispersion, we develop a series of five new measures and explore their relationships with communication frequency data from a sample of 182 teams (of varying degrees of dispersion) from a Fortune 500 telecommunications firm. We conclude with recommendations regarding the use of different measures and important questions that they could help address. Geographic Dispersion in Teams 1",
"title": ""
},
{
"docid": "750abc9e51aed62305187d7103e3f267",
"text": "This design paper presents new guidance for creating map legends in a dynamic environment. Our contribution is a set ofguidelines for legend design in a visualization context and a series of illustrative themes through which they may be expressed. Theseare demonstrated in an applications context through interactive software prototypes. The guidelines are derived from cartographicliterature and in liaison with EDINA who provide digital mapping services for UK tertiary education. They enhance approaches tolegend design that have evolved for static media with visualization by considering: selection, layout, symbols, position, dynamismand design and process. Broad visualization legend themes include: The Ground Truth Legend, The Legend as Statistical Graphicand The Map is the Legend. Together, these concepts enable us to augment legends with dynamic properties that address specificneeds, rethink their nature and role and contribute to a wider re-evaluation of maps as artifacts of usage rather than statements offact. EDINA has acquired funding to enhance their clients with visualization legends that use these concepts as a consequence ofthis work. The guidance applies to the design of a wide range of legends and keys used in cartography and information visualization.",
"title": ""
},
{
"docid": "5433a8e449bf4bf9d939e645e171f7e5",
"text": "Software Testing (ST) processes attempt to verify and validate the capability of a software system to meet its required attributes and functionality. As software systems become more complex, the need for automated software testing methods emerges. Machine Learning (ML) techniques have shown to be quite useful for this automation process. Various works have been presented in the junction of ML and ST areas. The lack of general guidelines for applying appropriate learning methods for software testing purposes is our major motivation in this current paper. In this paper, we introduce a classification framework which can help to systematically review research work in the ML and ST domains. The proposed framework dimensions are defined using major characteristics of existing software testing and machine learning methods. Our framework can be used to effectively construct a concrete set of guidelines for choosing the most appropriate learning method and applying it to a distinct stage of the software testing life-cycle for automation purposes.",
"title": ""
},
{
"docid": "4a84fabb0b4edefc1850940ed2081f47",
"text": "Given a large overcomplete dictionary of basis vectors, the goal is to simultaneously represent L>1 signal vectors using coefficient expansions marked by a common sparsity profile. This generalizes the standard sparse representation problem to the case where multiple responses exist that were putatively generated by the same small subset of features. Ideally, the associated sparse generating weights should be recovered, which can have physical significance in many applications (e.g., source localization). The generic solution to this problem is intractable and, therefore, approximate procedures are sought. Based on the concept of automatic relevance determination, this paper uses an empirical Bayesian prior to estimate a convenient posterior distribution over candidate basis vectors. This particular approximation enforces a common sparsity profile and consistently places its prominent posterior mass on the appropriate region of weight-space necessary for simultaneous sparse recovery. The resultant algorithm is then compared with multiple response extensions of matching pursuit, basis pursuit, FOCUSS, and Jeffreys prior-based Bayesian methods, finding that it often outperforms the others. Additional motivation for this particular choice of cost function is also provided, including the analysis of global and local minima and a variational derivation that highlights the similarities and differences between the proposed algorithm and previous approaches.",
"title": ""
},
{
"docid": "1720517b913ce3974ab92239ff8a177e",
"text": "Honeypot is a closely monitored computer resource that emulates behaviors of production host within a network in order to lure and attract the attackers. The workability and effectiveness of a deployed honeypot depends on its technical configuration. Since honeypot is a resource that is intentionally made attractive to the attackers, it is crucial to make it intelligent and self-manageable. This research reviews at artificial intelligence techniques such as expert system and case-based reasoning, in order to build an intelligent honeypot.",
"title": ""
}
] |
scidocsrr
|
ae96c68dac549b555cef65579a2e7fc3
|
End-to-end Relation Extraction using Neural Networks and Markov Logic Networks
|
[
{
"docid": "470ecc2bc4299d913125d307c20dd48d",
"text": "The task of end-to-end relation extraction consists of two sub-tasks: i) identifying entity mentions along with their types and ii) recognizing semantic relations among the entity mention pairs. It has been shown that for better performance, it is necessary to address these two sub-tasks jointly [22,13]. We propose an approach for simultaneous extraction of entity mentions and relations in a sentence, by using inference in Markov Logic Networks (MLN) [21]. We learn three different classifiers : i) local entity classifier, ii) local relation classifier and iii) “pipeline” relation classifier which uses predictions of the local entity classifier. Predictions of these classifiers may be inconsistent with each other. We represent these predictions along with some domain knowledge using weighted first-order logic rules in an MLN and perform joint inference over the MLN to obtain a global output with minimum inconsistencies. Experiments on the ACE (Automatic Content Extraction) 2004 dataset demonstrate that our approach of joint extraction using MLNs outperforms the baselines of individual classifiers. Our end-to-end relation extraction performance is better than 2 out of 3 previous results reported on the ACE 2004 dataset.",
"title": ""
},
{
"docid": "2088c56bb59068a33de09edc6831e74b",
"text": "We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional treestructured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the stateof-the-art feature-based model on end-toend relation extraction, achieving 3.5% and 4.8% relative error reductions in F1score on ACE2004 and ACE2005, respectively. We also show a 2.5% relative error reduction in F1-score over the state-ofthe-art convolutional neural network based model on nominal relation classification (SemEval-2010 Task 8).",
"title": ""
}
] |
[
{
"docid": "d82c1a529aa8e059834bc487fcfebd24",
"text": "Web attacks are nowadays one of the major threats on the Internet, and several studies have analyzed them, providing details on how they are performed and how they spread. However, no study seems to have sufficiently analyzed the typical behavior of an attacker after a website has been",
"title": ""
},
{
"docid": "aace50c8446403a9f72b24bce1e88c30",
"text": "This paper presents a model-driven approach to the development of web applications based on the Ubiquitous Web Application (UWA) design framework, the Model-View-Controller (MVC) architectural pattern and the JavaServer Faces technology. The approach combines a complete and robust methodology for the user-centered conceptual design of web applications with the MVC metaphor, which improves separation of business logic and data presentation. The proposed approach, by carrying the advantages of ModelDriven Development (MDD) and user-centered design, produces Web applications which are of high quality from the user's point of view and easier to maintain and evolve.",
"title": ""
},
{
"docid": "a1bff389a9a95926a052ded84c625a9e",
"text": "Automatically assessing the subjective quality of a photo is a challenging area in visual computing. Previous works study the aesthetic quality assessment on a general set of photos regardless of the photo's content and mainly use features extracted from the entire image. In this work, we focus on a specific genre of photos: consumer photos with faces. This group of photos constitutes an important part of consumer photo collections. We first conduct an online study on Mechanical Turk to collect ground-truth and subjective opinions for a database of consumer photos with faces. We then extract technical features, perceptual features, and social relationship features to represent the aesthetic quality of a photo, by focusing on face-related regions. Experiments show that our features perform well for categorizing or predicting the aesthetic quality.",
"title": ""
},
{
"docid": "009a7247ef27758f6c303cea8108dae1",
"text": "We describe a method for automatic generation of a learning path for education or selfeducation. As a knowledge base, our method uses the semantic structure view from Wikipedia, leveraging on its broad variety of covered concepts. We evaluate our results by comparing them with the learning paths suggested by a group of teachers. Our algorithm is a useful tool for instructional design process.",
"title": ""
},
{
"docid": "00bbfb52c5c54d83ea31fed1ec85b1a2",
"text": "We propose, analyze, and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also the isotropic forms of TV discretizations. The per-iteration computational complexity of the algorithm is three fast Fourier transforms. We establish strong convergence properties for the algorithm including finite convergence for some variables and relatively fast exponential (or q-linear in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the lagged diffusivity algorithm for TV-based deblurring. Some extensions of our algorithm are also discussed.",
"title": ""
},
{
"docid": "4bb7720583a1a33b2dff5d7a994b44af",
"text": "Automatic License Plate Recognition (ALPR) systems capture a vehicle‟s license plate and recognize the license number and other required information from the captured image. ALPR systems have numbers of significant applications: law enforcement, public safety agencies, toll gate systems, etc. The goal of these systems is to recognize the characters and state on the license plate with high accuracy. ALPR has been implemented using various techniques. Traditional recognition methods use handcrafted features for obtaining features from the image. Unlike conventional methods, deep learning techniques automatically select features and are one of the game changing technologies in the field of computer vision, automatic recognition tasks and natural language processing. Some of the most successful deep learning methods involve Convolutional Neural Networks. This technique applies deep learning techniques to the ALPR problem of recognizing the state and license number from the USA license plate. Existing ALPR systems include three stages of",
"title": ""
},
{
"docid": "5f77218388ee927565a993a8e8c48ef3",
"text": "The paper presents an idea of Lexical Platform proposed as a means for a lightweight integration of various lexical resources into one complex (from the perspective of non-technical users). All LRs will be represented as software web components implementing a minimal set of predefined programming interfaces providing functionality for querying and generating simple common presentation format. A common data format for the resources will not be required. Users will be able to search, browse and navigate via resources on the basis of anchor elements of a limited set of types. Lexical resources linked to the platform via components will preserve their identity.",
"title": ""
},
{
"docid": "f811a281efec4eb6b9f703ebb420407b",
"text": "Hospital workers are highly mobile; they are constantly changing location to perform their daily work, which includes visiting patients, locating resources, such as medical records, or consulting with other specialists. The information required by these specialists is highly dependent on their location. Access to a patient's laboratory results might be more relevant when the physician is near the patient's bed and not elsewhere. We describe a location-aware medical information system that was developed to provide access to resources such as patient's records or the location of a medical specialist, based on the user's location. The system is based on a handheld computer which includes a trained backpropagation neural-network used to estimate the user's location and a client to access information from the hospital information system that is relevant to the user's current location.",
"title": ""
},
{
"docid": "db2cd0762b560faf3aaf5e27ad3e13a1",
"text": "Soil is an excellent niche of growth of many microorganisms: protozoa, fungi, viruses, and bacteria. Some microorganisms are able to colonize soil surrounding plant roots, the rhizosphere, making them come under the influence of plant roots (Hiltner 1904; Kennedy 2005). These bacteria are named rhizobacteria. Rhizobacteria are rhizosphere competent bacteria able to multiply and colonize plant roots at all stages of plant growth, in the presence of a competing microflora (Antoun and Kloepper 2001) where they are in contact with other microorganisms. This condition is wildly encountered in natural, non-autoclaved soils. Generally, interactions between plants and microorganisms can be classified as pathogenic, saprophytic, and beneficial (Lynch 1990). Beneficial interactions involve plant growth promoting rhizobacteria (PGPR), generally refers to a group of soil and rhizosphere free-living bacteria colonizing roots in a competitive environment and exerting a beneficial effect on plant growth (Kloepper and Schroth 1978; Lazarovits and Nowak 1997; Kloepper et al. 1989; Kloepper 2003; Bakker et al. 2007). However, numerous researchers tend to enlarge this restrictive definition of rhizobacteria as any root-colonizing bacteria and consider endophytic bacteria in symbiotic association: Rhizobia with legumes and the actinomycete Frankia associated with some phanerogams as PGPR genera. Among PGPRs are representatives of the following genera: Acinetobacter, Agrobacterium, Arthrobacter, Azoarcus, Azospirillum, Azotobacter, Bacillus, Burkholderia, Enterobacter, Klebsiella, Pseudomonas, Rhizobium, Serratia, and Thiobacillus. Some of these genera such as Azoarcus spp., Herbaspirillum, and Burkholderia include endophytic species.",
"title": ""
},
{
"docid": "46849f5c975551b401bccae27edd9d81",
"text": "Many ideas of High Performance Computing are applicable to Big Data problems. The more so now, that hybrid, GPU computing gains traction in mainstream computing applications. This work discusses the differences between the High Performance Computing software stack and the Big Data software stack and then focuses on two popular computing workloads, the Alternating Least Squares algorithm and the Singular Value Decomposition, and shows how their performance can be maximized using hybrid computing techniques.",
"title": ""
},
{
"docid": "fe94f4795d43572b27bbe27db5537e5c",
"text": "Event-related desynchronization (ERD) 2.0 sec before and 1.0 sec after movement in the frequency bands of 8-10, 10-12, 12-20 and 20-30 Hz and movement-related cortical potentials (MRCPs) to self-paced movements were studied from subdural recordings over the central region in 3 patients, and from scalp-recorded EEGs in 20 normal volunteers. In direct cortical recordings, the peak ERD response and peak MRCP amplitude to self-paced finger movements were maximal over recording sites in the contralateral hand motor representations. The topography and time of onset of the ERD response to finger and foot movements suggest that the ERD responses in the 8-10 Hz and 10-12 Hz bands are more somatotopically restricted than the responses in the higher frequency bands. The power recovery and subsequent overshoot in the different frequency bands occurred in an orderly fashion with the faster frequencies recovering earlier. The ERD responses on the scalp-recorded EEGs were of lower magnitude and more widely distributed than those occurring on the subdural recordings. Across the population, there was no relation between the magnitude of the ERD response in any of the frequency bands studied and the peak amplitude of the negative slope (pNS') and the frontal peak of the motor potential (fpMP) of the MRCPs. MRCPs and ERD responses originate in similar cortical regions and share some common timing features, but the magnitude and spatial distribution of the two responses appear to be independent of each other, which suggests that the physiological mechanisms governing these two events are different and may represent different aspects of motor cortex activation. Differences in the timing and topographical features of the ERD responses in the various frequency bands also suggest a distinct functional significance for the various spectral components of the electrical activity in the motor cortex.",
"title": ""
},
{
"docid": "b9be60146ace98fe90b6ac82a57a4a89",
"text": "OBJECTIVE\nTo examine the specificity of low CSF hypocretin-1 levels in narcolepsy and explore the potential role of hypocretins in other neurologic disorders.\n\n\nMETHODS\nA method to measure hypocretin-1 in 100 microL of crude CSF sample was established and validated. CSF hypocretin-1 was measured in 42 narcolepsy patients (ages 16-70 years), 48 healthy controls (ages 22-77 years,) and 235 patients with various other neurologic conditions (ages 0-85 years).\n\n\nRESULTS\nAs previously reported, CSF hypocretin-1 levels were undetectably low (<100 pg/mL) in 37 of 42 narcolepsy subjects. Hypocretin-1 levels were detectable in all controls (224-653 pg/mL) and all neurologic patients (117-720 pg/mL), with the exception of three patients with Guillain-Barré syndrome (GBS). Hypocretin-1 was within the control range in most neurologic patients tested, including patients with AD, PD, and MS. Low but detectable levels (100-194 pg/mL) were found in a subset of patients with acute lymphocytic leukemia, intracranial tumors, craniocerebral trauma, CNS infections, and GBS.\n\n\nCONCLUSIONS\nUndetectable CSF hypocretin-1 levels are highly specific to narcolepsy and rare cases of GBS. Measuring hypocretin-1 levels in the CSF of patients suspected of narcolepsy is a useful diagnostic procedure. Low hypocretin levels are also observed in a large range of neurologic conditions, most strikingly in subjects with head trauma. These alterations may reflect focal lesions in the hypothalamus, destruction of the blood brain barrier, or transient or chronic hypofunction of the hypothalamus. Future research in this area is needed to establish functional significance.",
"title": ""
},
{
"docid": "f3c6b42ed65b38708b12d46c48af4f0b",
"text": "Data are often labeled by many different experts with each expert only labeling a small fraction of the data and each data point being labeled by several experts. This reduces the workload on individual experts and also gives a better estimate of the unobserved ground truth. When experts disagree, the standard approaches are to treat the majority opinion as the correct label and to model the correct label as a distribution. These approaches, however, do not make any use of potentially valuable information about which expert produced which label. To make use of this extra information, we propose modeling the experts individually and then learning averaging weights for combining them, possibly in samplespecific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. Here we show that our approach leads to improvements in computeraided diagnosis of diabetic retinopathy. We also show that our method performs better than competing algorithms by Welinder and Perona (2010); Mnih and Hinton (2012). Our work offers an innovative approach for dealing with the myriad real-world settings that use expert opinions to define labels",
"title": ""
},
{
"docid": "c52d31c7ae39d1a7df04140e920a26d2",
"text": "In the past half-decade, Amazon Mechanical Turk has radically changed the way many scholars do research. The availability of a massive, distributed, anonymous crowd of individuals willing to perform general human-intelligence micro-tasks for micro-payments is a valuable resource for researchers and practitioners. This paper addresses the challenges of obtaining quality annotations for subjective judgment oriented tasks of varying difficulty. We design and conduct a large, controlled experiment (N=68,000) to measure the efficacy of selected strategies for obtaining high quality data annotations from non-experts. Our results point to the advantages of person-oriented strategies over process-oriented strategies. Specifically, we find that screening workers for requisite cognitive aptitudes and providing training in qualitative coding techniques is quite effective, significantly outperforming control and baseline conditions. Interestingly, such strategies can improve coder annotation accuracy above and beyond common benchmark strategies such as Bayesian Truth Serum (BTS).",
"title": ""
},
{
"docid": "8b416a37b319153eca38105c6de3fd2a",
"text": "UNSUPERVISED ANOMALY DETECTION IN SEQUENCES USING LONG SHORT TERM MEMORY RECURRENT NEURAL NETWORKS Majid S. alDosari George Mason University, 2016 Thesis Director: Dr. Kirk D. Borne Long Short Term Memory (LSTM) recurrent neural networks (RNNs) are evaluated for their potential to generically detect anomalies in sequences. First, anomaly detection techniques are surveyed at a high level so that their shortcomings are exposed. The shortcomings are mainly their inflexibility in the use of a context ‘window’ size and/or their suboptimal performance in handling sequences. Furthermore, high-performing techniques for sequences are usually associated with their respective knowledge domains. After discussing these shortcomings, RNNs are exposed mathematically as generic sequence modelers that can handle sequences of arbitrary length. From there, results from experiments using RNNs show their ability to detect anomalies in a set of test sequences. The test sequences had different types of anomalies and unique normal behavior. Given the characteristics of the test data, it was concluded that the RNNs were not only able to generically distinguish rare values in the data (out of context) but were also able to generically distinguish abnormal patterns (in context). In addition to the anomaly detection work, a solution for reproducing computational research is described. The solution addresses reproducing compute applications based on Docker container technology as well as automating the infrastructure that runs the applications. By design, the solution allows the researcher to seamlessly transition from local (test) application execution to remote (production) execution because little distinction is made between local and remote execution. Such flexibility and automation allows the researcher to be more confident of results and more productive, especially when dealing with multiple machines. Chapter 1: Introduction In the modern world, large amounts of time series data of various types are recorded. Inexpensive and compact instrumentation and storage allows various types of processes to be recorded. For example, human activity being recorded includes physiological signals, automotive traffic, website navigation activity, and communication network traffic. Other kinds of data are captured from instrumentation in industrial processes, automobiles, space probes, telescopes, geological formations, oceans, power lines, and residential thermostats. Furthermore, the data can be machine generated for diagnostic purposes such as web server logs, system startup logs, and satellite status logs. Increasingly, these data are being analyzed. Inexpensive and ubiquitous networking has allowed the data to be transmitted for processing. At the same time, ubiquitous computing has allowed the data to be processed at the location of capture. While the data can be recorded for historical purposes, much value can be obtained from finding anomalous data. However, it is challenging to manually analyze large and varied quantities of data to find anomalies. Even if a procedure can be developed for one type of data, it usually cannot be applied to another type of data. Hence, the problem that is addressed can be stated as follows: find anomalous points in an arbitrary (unlabeled) sequence. So, a solution must use the same procedure to analyze different types of time series data. The solution presented here comes from an unsupervised use of recurrent neural networks. 
A literature search only readily gives two similar solutions. In the acoustics domain, [1] transform audio signals into a sequence of spectral features which are then input to a denoising recurrent autoencoder. Improving on this, [2] use recurrent neural networks (directly) without the use of features (that are specific to a problem domain, like acoustics) to multiple domains. This work closely resembles [2] but presenting a single, highly-automated procedure that applies to many domains is emphasized. First, some background is given on anomaly detection that explains the challenges of finding a solution. Second, recurrent neural networks are introduced as general sequence modelers. Then, experiments will be presented to show that recurrent neural networks can find different types of anomalies in multiple domains. Finally, concluding remarks are given. (Footnotes: In this document, the terms ‘time series’ and ‘sequence’ are used interchangeably without implication to the discussion. Strictly however, a time series is a sequence of time-indexed elements. So a sequence is the more general object. As such, the term ‘sequence’ is used when a general context is more applicable. Furthermore, the terms do not imply that the data are real, discrete, or symbolic. However, literature frequently uses the terms ‘time series’ and ‘sequence’ for real and symbolic data respectively. Here, the term ‘time series’ was used to emphasize that much data is recorded from monitoring devices which implies that a timestamp is associated with each data point. Outlier, surprise, novelty, and deviation detection are alternative names used in literature.) Chapter 2: The Challenge of Anomaly Detection in Sequences",
"title": ""
},
{
"docid": "cc124a93db48348e37aacac87081e3d4",
"text": "The design of an ultra-wideband crossover for use in printed microwave circuits is presented. It employs a pair of broadside-coupled microstrip-to-coplanar waveguide (CPW) transitions, and a pair of uniplanar microstrip-to-CPW transitions. A lumped-element equivalent circuit is used to explain the operation of the proposed crossover. Its performance is evaluated via full-wave electromagnetic simulations and measurements. The designed device is constructed on a single substrate, and thus, it is fully compatible with microstrip-based microwave circuits. The crossover is shown to operate across the frequency band from 3.1 to 11 GHz with more than 15 dB of isolation, less than 1 dB of insertion loss, and less than 0.1 ns of deviation in the group delay.",
"title": ""
},
{
"docid": "f99f522836431aae3e3f98564bcfc125",
"text": "Malaysia is a developing country and government’s urbanization policy in 1980s has encouraged migration of rural population to urban centres, consistent with the shift of economy orientation from agriculture base to industrial base. At present about 60% Malaysian live in urban areas. Live demands and labour shortage in industrial sector have forced mothers to join labour force. At present there are about 65% mothers with children below 15 years of age working fulltime outside homes. Issues related to parenting and children’s development becomes crucial especially in examination oriented society like Malaysia. Using 200 families as sample this study attempted to examine effects of parenting styles of dual-earner families on children behaviour and school achievement. Results of the study indicates that for mothers and fathers authoritative style have positive effects on children behaviour and school achievement. In contrast, the permissive and authoritarian styles have negative effects on children behaviour and school achievement. Effects of findings on children development are discussed.",
"title": ""
},
{
"docid": "7f92ead5b555e9447e44ad73392c25d1",
"text": "Multiple antenna systems are a useful way of overcoming the effects of multipath interference, and can allow more efficient use of spectrum. In order to test the effectiveness of various algorithms such as diversity combining, phased array processing, and adaptive array processing in an indoor environment, a channel model is needed which models both the time and angle of arrival in indoor environments. Some data has been collected indoors and some temporal models have been proposed, but no existing model accounts for both time and angle of arrival. This paper discusses existing models for the time of arrival, experimental data that were collected indoors, and a proposed extension of the Saleh-Valenzuela model [1], which accounts for the angle of arrival. Model parameters measured in two different buildings are compared with the parameters presented in the paper by Saleh and Valenzuela, and some statistical validation of the model is presented.",
"title": ""
},
{
"docid": "1062f37de56db35202f8979a7ea88efd",
"text": "This paper attempts to evaluate the anti-inflammatory potential and the possible mechanism of action of the leaf extracts and isolated compound(s) of Aerva sanguinolenta (Amaranthaceae), traditionally used in ailments related to inflammation. The anti-inflammatory activity of ethanol extract (ASE) was evaluated by acute, subacute and chronic models of inflammation, while a new cerebroside (‘trans’, ASE-1), isolated from the bioactive ASE and characterized spectroscopically, was tested by carrageenan-induced mouse paw oedema and protein exudation model. To understand the underlying mechanism, we measured the release of pro-inflammatory mediators such as nitric oxide (NO) and prostaglandin (PG)E2, along with the cytokines like tumour necrosis factor (TNF)-α, and interleukins(IL)-1β, IL-6 and IL-12 from lipopolysaccharide (LPS)-stimulated peritoneal macrophages. The results revealed that ASE at 400 mg/kg caused significant reduction of rat paw oedema, granuloma and exudative inflammation, while the inhibition of mouse paw oedema and exudative inflammation by ASE-1 (20 mg/kg) was comparable to that of the standard drug indomethacin (10 mg/kg). Interestingly, both ASE and ASE-1 showed significant inhibition of the expressions of iNOS2 and COX-2, and the down-regulation of the expressions of IL-1β, IL-6, IL-12 and TNF-α, in LPS-stimulated macrophages, via the inhibition of COX-2-mediated PGE2 release. Thus, our results validated the traditional use of A. sanguinolenta leaves in inflammation management.",
"title": ""
},
{
"docid": "07ec93308d91268506643ba4e4018085",
"text": "Existing code similarity comparison methods, whether source or binary code based, are mostly not resilient to obfuscations. Identifying similar or identical code fragments among programs is very important in some applications. For example, one application is to detect illegal code reuse. In the code theft cases, emerging obfuscation techniques have made automated detection increasingly difficult. Another application is to identify cryptographic algorithms which are widely employed by modern malware to circumvent detection, hide network communications, and protect payloads among other purposes. Due to diverse coding styles and high programming flexibility, different implementation of the same algorithm may appear very distinct, causing automatic detection to be very hard, let alone code obfuscations are sometimes applied. In this paper, we propose a binary-oriented, obfuscation-resilient binary code similarity comparison method based on a new concept, longest common subsequence of semantically equivalent basic blocks , which combines rigorous program semantics with longest common subsequence based fuzzy matching. We model the semantics of a basic block by a set of symbolic formulas representing the input-output relations of the block. This way, the semantic equivalence (and similarity) of two blocks can be checked by a theorem prover. We then model the semantic similarity of two paths using the longest common subsequence with basic blocks as elements. This novel combination has resulted in strong resiliency to code obfuscation. We have developed a prototype. The experimental results show that our method can be applied to software plagiarism and algorithm detection, and is effective and practical to analyze real-world software.",
"title": ""
}
] |
scidocsrr
|
305542b453075f284bf65c67079082c5
|
Title : Towards a common framework for knowledge co-creation : opportunities for collaboration between Service Science and Sustainability Science Track : Viable Systems Approach
|
[
{
"docid": "9f20a4117c3e09250af9e9c3de4d37de",
"text": "Service-dominant logic (S-D logic) is contrasted with goods-dominant (G-D) logic to provide a framework for thinking more clearly about the concept of service and its role in exchange and competition. Then, relying upon the nine foundational premises of S-D logic [Vargo, Stephen L. and Robert F. Lusch (2004). “Evolving to a New Dominant Logic for Marketing,†Journal of Marketing, 68 (January) 1–17; Lusch, Robert F. and Stephen L. Vargo (2006), “Service-Dominant Logic as a Foundation for Building a General Theory,†in The Service-Dominant Logic of Marketing: Dialog, Debate and Directions. Robert F. Lusch and Stephen L. Vargo (eds.), Armonk, NY: M.E. Sharpe, 406–420] nine derivative propositions are developed that inform marketers on how to compete through service. a c, 2 Purchase Export",
"title": ""
}
] |
[
{
"docid": "6aaa2b6cc2593ee2f65623ddb9c84f4c",
"text": "We propose a large dataset for machine learning-based automatic keyphrase extraction. The dataset has a high quality and consist of 2,000 of scientific papers from computer science domain published by ACM. Each paper has its keyphrases assigned by the authors and verified by the reviewers. Different parts of papers, such as title and abstract, are separated, enabling extraction based on a part of an article's text. The content of each paper is converted from PDF to plain text. The pieces of formulae, tables, figures and LaTeX mark up were removed automatically. For removal we have used Maximum Entropy Model-based machine learning and achieved 97.04% precision. Preliminary investigation with help of the state of the art keyphrase extraction system KEA shows keyphrases recognition accuracy improvement for refined texts.",
"title": ""
},
{
"docid": "5948af3805969eb3b9e1cca4c8a5957c",
"text": "Force-controllable actuators are essential for guaranteeing safety in human–robot interactions. Magnetic lead screws (MLSs) transfer force without requiring contact between parts. These devices can drive the parts with high efficiency and no frictional contact, and they are force limited when overloaded. We have developed a novel MLS that does not include spiral permanent magnets and an MLS-driven linear actuator (MLSDLA) that uses this device. This simple structure reduces the overall size of the device and improves productivity because it is constructed by a commonly used machined screw as a screw. The actuator can drive back against an external force and it moves flexibly based on the magnetic spring effect. In this paper, we propose a force estimation method for the MLSDLA that does not require separate sensors. The magnetic phase difference, as measured from the angular and linear displacements of the actuator, is used for this calculation. The estimated force is then compared against measurements recorded with a load sensor in order to verify the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "164e5bde10882e3f7a6bcdf473eb7387",
"text": "This paper is written for (social science) researchers seeking to analyze the wealth of social media now available. It presents a comprehensive review of software tools for social networking media, wikis, really simple syndication feeds, blogs, newsgroups, chat and news feeds. For completeness, it also includes introductions to social media scraping, storage, data cleaning and sentiment analysis. Although principally a review, the paper also provides a methodology and a critique of social media tools. Analyzing social media, in particular Twitter feeds for sentiment analysis, has become a major research and business activity due to the availability of web-based application programming interfaces (APIs) provided by Twitter, Facebook and News services. This has led to an ‘explosion’ of data services, software tools for scraping and analysis and social media analytics platforms. It is also a research area undergoing rapid change and evolution due to commercial pressures and the potential for using social media data for computational (social science) research. Using a simple taxonomy, this paper provides a review of leading software tools and how to use them to scrape, cleanse and analyze the spectrum of social media. In addition, it discussed the requirement of an experimental computational environment for social media research and presents as an illustration the system architecture of a social media (analytics) platform built by University College London. The principal contribution of this paper is to provide an overview (including code fragments) for scientists seeking to utilize social media scraping and analytics either in their research or business. The data retrieval techniques that are presented in this paper are valid at the time of writing this paper (June 2014), but they are subject to change since social media data scraping APIs are rapidly changing.",
"title": ""
},
{
"docid": "10172bbb61d404eb38a898bafadb5021",
"text": "Numerical code uses floating-point arithmetic and necessarily suffers from roundoff and truncation errors. Error analysis is the process to quantify such uncertainty in the solution to a problem. Forward error analysis and backward error analysis are two popular paradigms of error analysis. Forward error analysis is more intuitive and has been explored and automated by the programming languages (PL) community. In contrast, although backward error analysis is more preferred by numerical analysts and the foundation for numerical stability, it is less known and unexplored by the PL community. To fill the gap, this paper presents an automated backward error analysis for numerical code to empower both numerical analysts and application developers. In addition, we use the computed backward error results to also compute the condition number, an important quantity recognized by numerical analysts for measuring how sensitive a function is to changes or errors in the input. Experimental results on Intel X87 FPU functions and widely-used GNU C Library functions demonstrate that our analysis is effective at analyzing the accuracy of floating-point programs.",
"title": ""
},
{
"docid": "18a5e6686a26a2f17c65a217022163b1",
"text": "This paper proposes the first derivation, implementation, and experimental validation of light field image-based visual servoing. Light field image Jacobians are derived based on a compact light field feature representation that is close to the form measured directly by light field cameras. We also enhance feature detection and correspondence by enforcing light field geometry constraints, and directly estimate the image Jacobian without knowledge of point depth. The proposed approach is implemented over a standard visual servoing control loop, and applied to a custom-mirror-based light field camera mounted on a robotic arm. Light field image-based visual servoing is then validated in both simulation and experiment. We show that the proposed method outperforms conventional monocular and stereo image-based visual servoing under field-of-view constraints and occlusions.",
"title": ""
},
{
"docid": "17ebf9f15291a3810d57771a8c669227",
"text": "We describe preliminary work toward applying a goal reasoning agent for controlling an underwater vehicle in a partially observable, dynamic environment. In preparation for upcoming at-sea tests, our investigation focuses on a notional scenario wherein a autonomous underwater vehicle pursuing a survey goal unexpectedly detects the presence of a potentially hostile surface vessel. Simulations suggest that Goal Driven Autonomy can successfully reason about this scenario using only the limited computational resources typically available on underwater robotic platforms.",
"title": ""
},
{
"docid": "b15b88a31cc1762618ca976bdf895d57",
"text": "How can we build agents that keep learning from experience, quickly and efficiently, after their initial training? Here we take inspiration from the main mechanism of learning in biological brains: synaptic plasticity, carefully tuned by evolution to produce efficient lifelong learning. We show that plasticity, just like connection weights, can be optimized by gradient descent in large (millions of parameters) recurrent networks with Hebbian plastic connections. First, recurrent plastic networks with more than two million parameters can be trained to memorize and reconstruct sets of novel, high-dimensional (1,000+ pixels) natural images not seen during training. Crucially, traditional non-plastic recurrent networks fail to solve this task. Furthermore, trained plastic networks can also solve generic meta-learning tasks such as the Omniglot task, with competitive results and little parameter overhead. Finally, in reinforcement learning settings, plastic networks outperform a non-plastic equivalent in a maze exploration task. We conclude that differentiable plasticity may provide a powerful novel approach to the learning-to-learn problem.",
"title": ""
},
{
"docid": "dec3f821a1f9fc8102450a4add31952b",
"text": "Homicide by hanging is an extremely rare incident [1]. Very few cases have been reported in which a person is rendered senseless and then hanged to simulate suicidal death; though there are a lot of cases in wherein a homicide victim has been hung later. We report a case of homicidal hanging of a young Sikh individual found hanging in a well. It became evident from the results of forensic autopsy that the victim had first been given alcohol mixed with pesticides and then hanged by his turban from a well. The rare combination of lynching (homicidal hanging) and use of organo-phosporous pesticide poisoning as a means of homicide are discussed in this paper.",
"title": ""
},
{
"docid": "24b2cedc9512566e44f9fd7e1acf8a85",
"text": "This paper presents an alternative visual authentication scheme with two secure layers for desktops or laptops. The first layer is a recognition-based scheme that addresses human factors for protection against bots by recognizing a Captcha and images with specific patterns. The second layer uses a clicked based Cued-Recall graphical password scheme for authentication, it also exploits emotions perceived by humans and use them as decision factor. The proposed authentication system is effective against brute-force, online guessing and relay attacks. We believe that the perception of security is enhaced using human emotions as main decision factor. The proposed scheme usability was tested using the Computer System Usability Questionnaires, results showed that it is highly usable and could improve the security level on ATM machines.",
"title": ""
},
{
"docid": "4f64e7ff2bed569d73da9cae011e995d",
"text": "Recent progress in semantic segmentation has been driven by improving the spatial resolution under Fully Convolutional Networks (FCNs). To address this problem, we propose a Stacked Deconvolutional Network (SDN) for semantic segmentation. In SDN, multiple shallow deconvolutional networks, which are called as SDN units, are stacked one by one to integrate contextual information and bring the fine recovery of localization information. Meanwhile, inter-unit and intra-unit connections are designed to assist network training and enhance feature fusion since the connections improve the flow of information and gradient propagation throughout the network. Besides, hierarchical supervision is applied during the upsampling process of each SDN unit, which enhances the discrimination of feature representations and benefits the network optimization. We carry out comprehensive experiments and achieve the new state-ofthe- art results on four datasets, including PASCAL VOC 2012, CamVid, GATECH, COCO Stuff. In particular, our best model without CRF post-processing achieves an intersection-over-union score of 86.6% in the test set.",
"title": ""
},
{
"docid": "f7e45feaa48b8d7741ac4cdb3ef4749b",
"text": "Classification problems refer to the assignment of some alt ern tives into predefined classes (groups, categories). Such problems often arise in several application fields. For instance, in assessing credit card applications the loan officer must evaluate the charact eristics of each applicant and decide whether an application should be accepted or rejected. Simil ar situations are very common in fields such as finance and economics, production management (fault diagnosis) , medicine, customer satisfaction measurement, data base management and retrieval, etc.",
"title": ""
},
{
"docid": "3fa0ab962ec54cea182a293810cf7ce8",
"text": "Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have. When something is peer reviewed it is in some sense blessed. Even journalists recognize this. When the BMJ published a highly controversial paper that argued that a new ‘disease’, female sexual dysfunction, was in some ways being created by pharmaceutical companies, a friend who is a journalist was very excited—not least because reporting it gave him a chance to get sex onto the front page of a highly respectable but somewhat priggish newspaper (the Financial Times). ‘But,’ the news editor wanted to know, ‘was this paper peer reviewed?’. The implication was that if it had been it was good enough for the front page and if it had not been it was not. Well, had it been? I had read it much more carefully than I read many papers and had asked the author, who happened to be a journalist, to revise the paper and produce more evidence. But this was not peer review, even though I was a peer of the author and had reviewed the paper. Or was it? (I told my friend that it had not been peer reviewed, but it was too late to pull the story from the front page.)",
"title": ""
},
{
"docid": "5c8923335dd4ee4c2123b5b3245fb595",
"text": "Virtualization is a key enabler of Cloud computing. Due to the numerous vulnerabilities in current implementations of virtualization, security is the major concern of Cloud computing. In this paper, we propose an enhanced security framework to detect intrusions at the virtual network layer of Cloud. It combines signature and anomaly based techniques to detect possible attacks. It uses different classifiers viz; naive bayes, decision tree, random forest, extra trees and linear discriminant analysis for an efficient and effective detection of intrusions. To detect distributed attacks at each cluster and at whole Cloud, it collects intrusion evidences from each region of Cloud and applies Dempster-Shafer theory (DST) for final decision making. We analyze the proposed security framework in terms of Cloud IDS requirements through offline simulation using different intrusion datasets.",
"title": ""
},
{
"docid": "8e1a63bc8cb3d329af03849c5b3aafd3",
"text": "First Sight, a vision system in labeling the outline of a moving human body, is proposed in this paper. The emphasis of First Sight is on the analysis of motion information gathered solely from the outline of a moving human object. Two main processes are implemented in First Sight. The first process uses a novel technique to extract the outline of a moving human body from an image sequence. The second process, which employs a new human body model, interprets the outline and produces a labeled two-dimensional human body stick figure for each frame of the image sequence. Extensive knowledge of the structure, shape, and posture of the human body is used in the model. The experimental results of applying the technique on unedited image sequences with self-occlusions and missing boundary lines are encouraging. Index Items-Coincidence edge, difference picture, human body, human body model, labeling, model, motion, outline, pose, posture, ribbon, stick figure.",
"title": ""
},
{
"docid": "216b169897d93939e64b552e4422aa69",
"text": "The ideal treatment of the nasolabial fold, the tear trough, the labiomandibular fold and the mentolabial sulcus is still discussed controversially. The detailed topographical anatomy of the fat compartments may clarify the anatomy of facial folds and may offer valuable information for choosing the adequate treatment modality. Nine non-fixed cadaver heads in the age range between 72 and 89 years (five female and four male) were investigated. Computed tomographic scans were performed after injection of a radiographic contrast medium directly into the fat compartments surrounding prominent facial folds. The data were analysed after multiplanar image reconstruction. The fat compartments surrounding the facial folds could be defined in each subject. Different arrangement patterns of the fat compartments around the facial rhytides were found. The nasolabial fold, the tear trough and the labiomandibular fold represent an anatomical border between adjacent fat compartments. By contrast, the glabellar fold and the labiomental sulcus have no direct relation to the boundaries of facial fat. Deep fat, underlying a facial rhytide, was identified underneath the nasolabial crease and the labiomental sulcus. In conclusion, an improvement by a compartment-specific volume augmentation of the nasolabial fold, the tear trough and the labiomandibular fold is limited by existing boundaries that extend into the skin. In the area of the nasolabial fold and the mentolabial sulcus, deep fat exists which can be used for augmentation and subsequent elevation of the folds. The treatment of the tear trough deformity appears anatomically the most challenging area since the superficial and deep fat compartments are separated by an osseo-cutaneous barrier, the orbicularis retaining ligament. In severe cases, a surgical treatment should be considered. By contrast, the glabellar fold shows the most simple anatomical architecture. The fold lies above one subcutaneous fat compartment that can be used for augmentation.",
"title": ""
},
{
"docid": "9b8317646ce6cad433e47e42198be488",
"text": "OBJECTIVE\nDigital mental wellbeing interventions are increasingly being used by the general public as well as within clinical treatment. Among these, mindfulness and meditation programs delivered through mobile device applications are gaining popularity. However, little is known about how people use and experience such applications and what are the enabling factors and barriers to effective use. To address this gap, the study reported here sought to understand how users adopt and experience a popular mobile-based mindfulness intervention.\n\n\nMETHODS\nA qualitative semi-structured interview study was carried out with 16 participants aged 25-38 (M=32.5) using the commercially popular mindfulness application Headspace for 30-40days. All participants were employed and living in a large UK city. The study design and interview schedule were informed by an autoethnography carried out by the first author for thirty days before the main study began. Results were interpreted in terms of the Reasoned Action Approach to understand behaviour change.\n\n\nRESULTS\nThe core concern of users was fitting the application into their busy lives. Use was also influenced by patterns in daily routines, on-going reflections about the consequences of using the app, perceived self-efficacy, emotion and mood states, personal relationships and social norms. Enabling factors for use included positive attitudes towards mindfulness and use of the app, realistic expectations and positive social influences. Barriers to use were found to be busy lifestyles, lack of routine, strong negative emotions and negative perceptions of mindfulness.\n\n\nCONCLUSIONS\nMobile wellbeing interventions should be designed with consideration of people's beliefs, affective states and lifestyles, and should be flexible to meet the needs of different users. Designers should incorporate features in the design of applications that manage expectations about use and that support users to fit app use into a busy lifestyle. The Reasoned Action Approach was found to be a useful theory to inform future research and design of persuasive mental wellbeing technologies.",
"title": ""
},
{
"docid": "49e2963e84967100deee8fc810e053ba",
"text": "We have developed a method for rigidly aligning images of tubes. This paper presents an evaluation of the consistency of that method for three-dimensional images of human vasculature. Vascular images may contain alignment ambiguities, poorly corresponding vascular networks, and non-rigid deformations, yet the Monte Carlo experiments presented in this paper show that our method registers vascular images with sub-voxel consistency in a matter of seconds. Furthermore, we show that the method's insensitivity to non-rigid deformations enables the localization, quantification, and visualization of those deformations. Our method aligns a source image with a target image by registering a model of the tubes in the source image directly with the target image. Time can be spent to extract an accurate model of the tubes in the source image. Multiple target images can then be registered with that model without additional extractions. Our registration method builds upon the principles of our tubular object segmentation work that combines dynamic-scale central ridge traversal with radius estimation. In particular, our registration method's consistency stems from incorporating multi-scale ridge and radius measures into the model-image match metric. Additionally, the method's speed is due in part to the use of coarse-to-fine optimization strategies that are enabled by measures made during model extraction and by the parameters inherent to the model-image match metric.",
"title": ""
},
{
"docid": "49f132862ca2c4a07d6233e8101a87ff",
"text": "Genetic data as a category of personal data creates a number of challenges to the traditional understanding of personal data and the rules regarding personal data processing. Although the peculiarities of and heightened risks regarding genetic data processing were recognized long before the data protection reform in the EU, the General Data Protection Regulation (GDPR) seems to pay no regard to this. Furthermore, the GDPR will create more legal grounds for (sensitive) personal data (incl. genetic data) processing whilst restricting data subjects’ means of control over their personal data. One of the reasons for this is that, amongst other aims, the personal data reform served to promote big data business in the EU. The substantive clauses of the GDPR concerning big data, however, do not differentiate between the types of personal data being processed. Hence, like all other categories of personal data, genetic data is subject to the big data clauses of the GDPR as well; thus leading to the question whether the GDPR is creating a pathway for ‘big genetic data’. This paper aims to analyse the implications that the role of the GDPR as a big data enabler bears on genetic data processing and the respective rights of the data",
"title": ""
},
{
"docid": "5542f4693a4251edcf995e7608fbda56",
"text": "This paper investigates the antecedents and consequences of customer loyalty in an online business-to-consumer (B2C) context. We identify eight factors (the 8Cs—customization, contact interactivity, care, community, convenience, cultivation, choice, and character) that potentially impact e-loyalty and develop scales to measure these factors. Data collected from 1,211 online customers demonstrate that all these factors, except convenience, impact e-loyalty. The data also reveal that e-loyalty has an impact on two customer-related outcomes: word-ofmouth promotion and willingness to pay more. © 2002 by New York University. All rights reserved.",
"title": ""
},
{
"docid": "43d9566553ecf29c72cdac7466aab9dc",
"text": "This paper presents an integrated approach for the automatic extraction of rectangularand circularshape buildings from high-resolution optical spaceborne images using the integration of support vector machine (SVM) classification, Hough transformation and perceptual grouping. The building patches are detected from the image using the binary SVM classification. The generated normalized digital surface model (nDSM) and the normalized difference vegetation index (NDVI) are incorporated in the classification process as additional bands. After detecting the building patches, the building boundaries are extracted through sequential processing of edge detection, Hough transformation and perceptual grouping. Those areas that are classified as building are masked and further processing operations are performed on the masked areas only. The edges of the buildings are detected through an edge detection algorithm that generates a binary edge image of the building patches. These edges are then converted into vector form through Hough transform and the buildings are constructed by means of perceptual grouping. To validate the developed method, experiments were conducted on pan-sharpened and panchromatic Ikonos imagery, covering the selected test areas in Batikent district of Ankara, Turkey. For the test areas that contain industrial buildings, the average building detection percentage (BDP) and quality percentage (QP) values were computed to be 93.45% and 79.51%, respectively. For the test areas that contain residential rectangular-shape buildings, the average BDP and QP values were computed to be 95.34% and 79.05%, respectively. For the test areas that contain residential circular-shape buildings, the average BDP and QP values were found to be 78.74% and 66.81%, respectively. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
452bfe889d01dccd523ba2c49603cab6
|
Modeling and Control of Three-Port DC/DC Converter Interface for Satellite Applications
|
[
{
"docid": "8b70670fa152dbd5185e80136983ff12",
"text": "This letter proposes a novel converter topology that interfaces three power ports: a source, a bidirectional storage port, and an isolated load port. The proposed converter is based on a modified version of the isolated half-bridge converter topology that utilizes three basic modes of operation within a constant-frequency switching cycle to provide two independent control variables. This allows tight control over two of the converter ports, while the third port provides the power balance in the system. The switching sequence ensures a clamping path for the energy of the leakage inductance of the transformer at all times. This energy is further utilized to achieve zero-voltage switching for all primary switches for a wide range of source and load conditions. Basic steady-state analysis of the proposed converter is included, together with a suggested structure for feedback control. Key experimental results are presented that validate the converter operation and confirm its ability to achieve tight independent control over two power processing paths. This topology promises significant savings in component count and losses for power-harvesting systems. The proposed topology and control is particularly relevant to battery-backed power systems sourced by solar or fuel cells",
"title": ""
}
] |
[
{
"docid": "6718aa3480c590af254a120376822d07",
"text": "This paper proposes a novel method for content-based watermarking based on feature points of an image. At each feature point, the watermark is embedded after scale normalization according to the local characteristic scale. Characteristic scale is the maximum scale of the scale-space representation of an image at the feature point. By binding watermarking with the local characteristics of an image, resilience against a5ne transformations can be obtained easily. Experimental results show that the proposed method is robust against various image processing steps including a5ne transformations, cropping, 7ltering and JPEG compression. ? 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ea048488791219be809072862a061444",
"text": "Our object oriented programming approach have great ability to improve the programming behavior for modern system and software engineering but it does not give the proper interaction of real world .In real world , programming required powerful interlinking among properties and characteristics towards the various objects. Basically this approach of programming gives the better presentation of object with real world and provide the better relationship among the objects. I have explained the new concept of my neuro object oriented approach .This approach contains many new features like originty , new concept of inheritance , new concept of encapsulation , object relation with dimensions , originty relation with dimensions and time , category of NOOPA like high order thinking object and low order thinking object , differentiation model for achieving the various requirements from the user and a rotational model .",
"title": ""
},
{
"docid": "060c1f1e08624c3b59610f150d6f27f8",
"text": "As graph models are applied to more widely varying fields, researchers struggle with tools for exploring and analyzing these structures. We describe GUESS, a novel system for graph exploration that combines an interpreted language with a graphical front end that allows researchers to rapidly prototype and deploy new visualizations. GUESS also contains a novel, interactive interpreter that connects the language and interface in a way that facilities exploratory visualization tasks. Our language, Gython, is a domain-specific embedded language which provides all the advantages of Python with new, graph specific operators, primitives, and shortcuts. We highlight key aspects of the system in the context of a large user survey and specific, real-world, case studies ranging from social and knowledge networks to distributed computer network analysis.",
"title": ""
},
{
"docid": "211484ec722f4df6220a86580d7ecba8",
"text": "The widespread use of vision-based surveillance systems has inspired many research efforts on people localization. In this paper, a series of novel image transforms based on the vanishing point of vertical lines is proposed for enhancement of the probabilistic occupancy map (POM)-based people localization scheme. Utilizing the characteristic that the extensions of vertical lines intersect at a vanishing point, the proposed transforms, based on image or ground plane coordinate system, aims at producing transformed images wherein each standing/walking person will have an upright appearance. Thus, the degradation in localization accuracy due to the deviation of camera configuration constraint specified can be alleviated, while the computation efficiency resulted from the applicability of integral image can be retained. Experimental results show that significant improvement in POM-based people localization for more general camera configurations can indeed be achieved with the proposed image transforms.",
"title": ""
},
{
"docid": "41b6bff4b6f3be41903725e39f630722",
"text": "Despite the huge research on crowd on behavior understanding in visual surveillance community, lack of publicly available realistic datasets for evaluating crowd behavioral interaction led not to have a fair common test bed for researchers to compare the strength of their methods in the real scenarios. This work presents a novel crowd dataset contains around 45,000 video clips which annotated by one of the five different fine-grained abnormal behavior categories. We also evaluated two state-of-the-art methods on our dataset, showing that our dataset can be effectively used as a benchmark for fine-grained abnormality detection. The details of the dataset and the results of the baseline methods are presented in the paper.",
"title": ""
},
{
"docid": "58b5be2fadbaacfb658f7d18cec807d3",
"text": "As the growth of rapid prototyping techniques shortens the development life cycle of software and electronic products, usability inquiry methods can play a more significant role during the development life cycle, diagnosing usability problems and providing metrics for making comparative decisions. A need has been realized for questionnaires tailored to the evaluation of electronic mobile products, wherein usability is dependent on both hardware and software as well as the emotional appeal and aesthetic integrity of the design. This research followed a systematic approach to develop a new questionnaire tailored to measure the usability of electronic mobile products. The Mobile Phone Usability Questionnaire (MPUQ) developed throughout this series of studies evaluates the usability of mobile phones for the purpose of making decisions among competing variations in the end-user market, alternatives of prototypes during the development process, and evolving versions during an iterative design process. In addition, the questionnaire can serve as a tool for identifying diagnostic information to improve specific usability dimensions and related interface elements. Employing the refined MPUQ, decision making models were developed using Analytic Hierarchy Process (AHP) and linear regression analysis. Next, a new group of representative mobile users was employed to develop a hierarchical model representing the usability dimensions incorporated in the questionnaire and to assign priorities to each node in the hierarchy. Employing the AHP and regression models, important usability dimensions and questionnaire items for mobile products were identified. Finally, a case study of comparative usability evaluations was performed to validate the MPUQ and models. A computerized support tool was developed to perform redundancy and relevancy analyses for the selection of appropriate questionnaire items. The weighted geometric mean was used to combine multiple numbers of matrices from pairwise comparison based on decision makers’ consistency ratio values for AHP. The AHP and regression models provided important usability dimensions so that mobile device usability practitioners can simply focus on the interface elements related to the decisive usability dimensions in order to improve the usability",
"title": ""
},
{
"docid": "2e29301adf162bb5e9fecea50a25a85a",
"text": "The collection and combination of assessment data in trustworthiness evaluation of cloud service is challenging, notably because QoS value may be missing in offline evaluation situation due to the time-consuming and costly cloud service invocation. Considering the fact that many trustworthiness evaluation problems require not only objective measurement but also subjective perception, this paper designs a novel framework named CSTrust for conducting cloud service trustworthiness evaluation by combining QoS prediction and customer satisfaction estimation. The proposed framework considers how to improve the accuracy of QoS value prediction on quantitative trustworthy attributes, as well as how to estimate the customer satisfaction of target cloud service by taking advantages of the perception ratings on qualitative attributes. The proposed methods are validated through simulations, demonstrating that CSTrust can effectively predict assessment data and release evaluation results of trustworthiness. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7165568feac9cc0bc0c1056b930958b8",
"text": "We describe a 63-year-old woman with an asymptomatic papular eruption on the vulva. Clinically, the lesions showed multiple pin-head-sized whitish papules on the labia major. Histologically, the biopsy specimen showed acantholysis throughout the epidermis with the presence of dyskeratotic cells resembling corps ronds and grains, hyperkeratosis and parakeratosis. These clinical and histological findings were consistent with the diagnosis of papular acantholytic dyskeratosis of the vulva which is a rare disorder, first described in 1984.",
"title": ""
},
{
"docid": "3e1690ae4d61d87edb0e4c3ce40f6a88",
"text": "Despite previous efforts in auditing software manually and automatically, buffer overruns are still being discovered in programs in use. A dynamic bounds checker detects buffer overruns in erroneous software before it occurs and thereby prevents attacks from corrupting the integrity of the system. Dynamic buffer overrun detectors have not been adopted widely because they either (1) cannot guard against all buffer overrun attacks, (2) break existing code, or (3) incur too high an overhead. This paper presents a practical detector called CRED (C Range Error Detector) that avoids each of these deficiencies. CRED finds all buffer overrun attacks as it directly checks for the bounds of memory accesses. Unlike the original referent-object based bounds-checking technique, CRED does not break existing code because it uses a novel solution to support program manipulation of out-of-bounds addresses. Finally, by restricting the bounds checks to strings in a program, CRED’s overhead is greatly reduced without sacrificing protection in the experiments we performed. CRED is implemented as an extension of the GNU C compiler version 3.3.1. The simplicity of our design makes possible a robust implementation that has been tested on over 20 open-source programs, comprising over 1.2 million lines of C code. CRED proved effective in detecting buffer overrun attacks on programs with known vulnerabilities, and is the only tool found to guard against a testbed of 20 different buffer overflow attacks[34]. Finding overruns only on strings impose an overhead of less This research was performed while the first author was at Stanford University, and this material is based upon work supported in part by the National Science Foundation under Grant No. 0086160. than 26% for 14 of the programs, and an overhead of up to 130% for the remaining six, while the previous state-ofthe-art bounds checker by Jones and Kelly breaks 60% of the programs and is 12 times slower. Incorporating wellknown techniques for optimizing bounds checking into CRED could lead to further performance improvements.",
"title": ""
},
{
"docid": "59a49feef4e3a79c5899fede208a183c",
"text": "This study proposed and tested a model of consumer online buying behavior. The model posits that consumer online buying behavior is affected by demographics, channel knowledge, perceived channel utilities, and shopping orientations. Data were collected by a research company using an online survey of 999 U.S. Internet users, and were cross-validated with other similar national surveys before being used to test the model. Findings of the study indicated that education, convenience orientation, Página 1 de 20 Psychographics of the Consumers in Electronic Commerce 11/10/01 http://www.ascusc.org/jcmc/vol5/issue2/hairong.html experience orientation, channel knowledge, perceived distribution utility, and perceived accessibility are robust predictors of online buying status (frequent online buyer, occasional online buyer, or non-online buyer) of Internet users. Implications of the findings and directions for future research were discussed.",
"title": ""
},
{
"docid": "21943e640ce9b56414994b5df504b1a6",
"text": "It is a preferable method to transfer power wirelessly using contactless slipring systems for rotary applications. The current single or multiple-unit single-phase systems often have limited power transfer capability, so they may not be able to meet the load requirements. This paper presents a contactless slipring system based on axially traveling magnetic field that can achieve a high output power level. A new index termed mutual inductance per pole is introduced to simplify the analysis of the mutually coupled poly-phase system to a single-phase basis. Both simulation and practical results have shown that the proposed system can transfer 2.7 times more power than a multiple-unit (six individual units) single-phase system with the same amount of ferrite and copper materials at higher power transfer efficiency. It has been found that the new system can achieve about 255.6 W of maximum power at 97% efficiency, compared to 68.4 W at 90% of a multiple-unit (six individual units) single-phase system.",
"title": ""
},
{
"docid": "caa7ecc11fc36950d3e17be440d04010",
"text": "In this paper, a comparative study of routing protocols is performed in a hybrid network to recommend the best routing protocol to perform load balancing for Internet traffic. Open Shortest Path First (OSPF), Interior Gateway Routing Protocol (IGRP) and Intermediate System to Intermediate System (IS-IS) routing protocols are compared in OPNET modeller 14 to investigate their capability of ensuring fair distribution of traffic in a hybrid network. The network simulated is scaled to a campus. The network loads are varied in size and performance study is made by running simulations with all the protocols. The only considered performance factors for observation are packet drop, network delay, throughput and network load. IGRP presented better performance as compared to other protocols. The benefit of using IGRP is reduced packet drop, reduced network delay, increased throughput while offering relative better distribution of traffic in a hybrid network.",
"title": ""
},
{
"docid": "a74081f7108e62fadb48446255dd246b",
"text": "Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration having lower generalization power than those of deep structures. This paper proposes a novel self-organizing deep fuzzy neural network, namely deep evolving fuzzy neural networks (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play little role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects the covariate drift, variations of input space, but also accurately identifies the real drift, dynamic changes of both feature space and target space. DEVFNN is developed under the stacked generalization principle via the feature augmentation concept where a recently developed algorithm, namely Generic Classifier (gClass), drives the hidden layer. It is equipped by an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of input space dimension due to the nature of feature augmentation approach in building a deep network structure. DEVFNN works in the sample-wise fashion and is compatible for data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using six datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four state-ofthe art data stream methods and its shallow counterpart where DEVFNN demonstrates improvement of classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of network structure while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.",
"title": ""
},
{
"docid": "2c442933c4729e56e5f4f46b5b8071d6",
"text": "Wireless body area networks consist of several devices placed on the human body, sensing vital signs and providing remote recognition of health disorders. Low power consumption is crucial in these networks. A new energy-efficient topology is provided in this paper, considering relay and sensor nodes' energy consumption and network maintenance costs. In this topology design, relay nodes, placed on the cloth, are used to help the sensor nodes forwarding data to the sink. Relay nodes' situation is determined such that the relay nodes' energy consumption merges the uniform distribution. Simulation results show that the proposed method increases the lifetime of the network with nearly uniform distribution of the relay nodes' energy consumption. Furthermore, this technique simultaneously reduces network maintenance costs and continuous replacements of the designer clothing. The proposed method also determines the way by which the network traffic is split and multipath routed to the sink.",
"title": ""
},
{
"docid": "48088cbe2f40cbbb32beb53efa224f3b",
"text": "Pain is a nonmotor symptom that substantially affects the quality of life of at least one-third of patients with Parkinson disease (PD). Interestingly, patients with PD frequently report different types of pain, and a successful approach to distinguish between these pains is required so that effective treatment strategies can be established. Differences between these pains are attributable to varying peripheral pain mechanisms, the role of motor symptoms in causing or amplifying pain, and the role of PD pathophysiology in pain processing. In this Review, we propose a four-tier taxonomy to improve classification of pain in PD. This taxonomy assigns nociceptive, neuropathic and miscellaneous pains to distinct categories, as well as further characterization into subcategories. Currently, treatment of pain in PD is based on empirical data only, owing to a lack of controlled studies. The facultative symptom of 'dopaminergically maintained pain' refers to pain that benefits from antiparkinson medication. Here, we also present additional pharmacological and nonpharmacological treatment approaches, which can be targeted to a specific pain following classification using our taxonomy.",
"title": ""
},
{
"docid": "936cdd4b58881275485739518ccb4f85",
"text": "Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems — BN’s error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN’s usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN’s computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code.",
"title": ""
},
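The Group Normalization abstract above notes that GN can be implemented in a few lines of code. The NumPy sketch below follows that description (channels split into groups, mean and variance computed per sample and per group), assuming an NCHW layout with a per-channel affine transform; it is an illustration, not the paper's released code.

```python
# Minimal NumPy sketch of Group Normalization for an NCHW tensor:
# statistics are computed per (sample, group), independently of batch size.
import numpy as np

def group_norm(x, gamma, beta, groups=32, eps=1e-5):
    n, c, h, w = x.shape
    assert c % groups == 0, "channels must be divisible by the group count"
    x = x.reshape(n, groups, c // groups, h, w)
    mean = x.mean(axis=(2, 3, 4), keepdims=True)
    var = x.var(axis=(2, 3, 4), keepdims=True)
    x = (x - mean) / np.sqrt(var + eps)
    x = x.reshape(n, c, h, w)
    # Per-channel scale and shift, as in BN.
    return x * gamma.reshape(1, c, 1, 1) + beta.reshape(1, c, 1, 1)

x = np.random.randn(2, 64, 8, 8)
y = group_norm(x, gamma=np.ones(64), beta=np.zeros(64), groups=32)
print(y.shape)  # (2, 64, 8, 8)
```

Because the statistics never involve the batch axis, the computation behaves identically for a batch of 2 or 256, which is the property the abstract highlights.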
{
"docid": "9fe93bda131467c7851d75644de83534",
"text": "The Banking industry has undergone a dramatic change since internet penetration and the concept of internet banking. Internet banking is defined as an internet portal, through which customers can use different kinds of banking services. Internet banking has major effects on banking relationships. The primary objective of this research is to identify the factors that influence internet banking adoption. Using PLS, a model is successfully proved and it is found that internet banking is influenced by its perceived reliability, Perceived ease of use and Perceived usefulness. In the marketing process of internet banking services marketing experts should emphasize these benefits its adoption provides and awareness can also be improved to attract consumers’ attention to internet banking services. Factors Influencing Consumer Adoption of Internet Banking in India 1 Assistant professor, Karunya School of Management, Karunya University, Coimbatore, India. Email: [email protected]",
"title": ""
},
{
"docid": "959c3d0aaa3c17ab43f0362fd03f7b98",
"text": "In this thesis, channel estimation techniques are studied and investigated for a novel multicarrier modulation scheme, Universal Filtered Multi-Carrier (UFMC). UFMC (a.k.a. UFOFDM) is considered as a candidate for the 5th Generation of wireless communication systems, which aims at replacing OFDM and enhances system robustness and performance in relaxed synchronization condition e.g. time-frequency misalignment. Thus, it may more efficiently support Machine Type Communication (MTC) and Internet of Things (IoT), which are considered as challenging applications for next generation of wireless communication systems. There exist many methods of channel estimation, time-frequency synchronization and equalization for classical CP-OFDM systems. Pilot-aided methods known from CP-OFDM are adopted and applied to UFMC systems. The performance of UFMC is then compared with CP-OFDM.",
"title": ""
},
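The UFMC thesis above adopts pilot-aided channel estimation methods known from CP-OFDM. The sketch below shows a generic least-squares (LS) estimate at pilot subcarriers with interpolation to the remaining carriers; it is a textbook-style illustration, not the thesis's actual implementation, and all parameter values are assumptions.

```python
# Generic pilot-aided least-squares channel estimation sketch (the kind of
# CP-OFDM technique the thesis adapts); not the thesis's actual code.
import numpy as np

def ls_channel_estimate(rx_symbols, pilot_symbols, pilot_idx, n_subcarriers):
    """LS estimate H = Y/X at pilot positions, interpolated to all subcarriers."""
    h_pilots = rx_symbols[pilot_idx] / pilot_symbols
    k = np.arange(n_subcarriers)
    # Interpolate real and imaginary parts separately.
    h_real = np.interp(k, pilot_idx, h_pilots.real)
    h_imag = np.interp(k, pilot_idx, h_pilots.imag)
    return h_real + 1j * h_imag

# Toy example: 64 subcarriers, pilots on every 8th carrier, known QPSK pilots.
n_sc = 64
pilot_idx = np.arange(0, n_sc, 8)
pilots = np.exp(1j * np.pi / 4 * np.ones(len(pilot_idx)))
true_h = np.fft.fft(np.array([1.0, 0.5, 0.2]), n_sc)      # 3-tap channel
rx = true_h * np.ones(n_sc, dtype=complex)                 # all-ones data
rx[pilot_idx] = true_h[pilot_idx] * pilots                 # pilots pass the channel
h_hat = ls_channel_estimate(rx, pilots, pilot_idx, n_sc)
print(np.max(np.abs(h_hat[pilot_idx] - true_h[pilot_idx])))  # ~0 at pilot positions
```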
{
"docid": "9b8e9b5fa9585cf545d6ab82483c9f38",
"text": "A survey of bacterial and archaeal genomes shows that many Tn7-like transposons contain minimal type I-F CRISPR-Cas systems that consist of fused cas8f and cas5f, cas7f, and cas6f genes and a short CRISPR array. Several small groups of Tn7-like transposons encompass similarly truncated type I-B CRISPR-Cas. This minimal gene complement of the transposon-associated CRISPR-Cas systems implies that they are competent for pre-CRISPR RNA (precrRNA) processing yielding mature crRNAs and target binding but not target cleavage that is required for interference. Phylogenetic analysis demonstrates that evolution of the CRISPR-Cas-containing transposons included a single, ancestral capture of a type I-F locus and two independent instances of type I-B loci capture. We show that the transposon-associated CRISPR arrays contain spacers homologous to plasmid and temperate phage sequences and, in some cases, chromosomal sequences adjacent to the transposon. We hypothesize that the transposon-encoded CRISPR-Cas systems generate displacement (R-loops) in the cognate DNA sites, targeting the transposon to these sites and thus facilitating their spread via plasmids and phages. These findings suggest the existence of RNA-guided transposition and fit the guns-for-hire concept whereby mobile genetic elements capture host defense systems and repurpose them for different stages in the life cycle of the element.",
"title": ""
}
] |
scidocsrr
|
6ca533a904ec1622f69593cff72dd8e8
|
Indirect content privacy surveys: measuring privacy without asking about it
|
[
{
"docid": "575da85b3675ceaec26143981dbe9b53",
"text": "People are increasingly required to disclose personal information to computerand Internetbased systems in order to register, identify themselves or simply for the system to work as designed. In the present paper, we outline two different methods to easily measure people’s behavioral self-disclosure to web-based forms. The first, the use of an ‘I prefer not to say’ option to sensitive questions is shown to be responsive to the manipulation of level of privacy concern by increasing the salience of privacy issues, and to experimental manipulations of privacy. The second, blurring or increased ambiguity was used primarily by males in response to an income question in a high privacy condition. Implications for the study of self-disclosure in human–computer interaction and web-based research are discussed. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "1c832140fce684c68fd91779d62596e3",
"text": "The safety and antifungal efficacy of amphotericin B lipid complex (ABLC) were evaluated in 556 cases of invasive fungal infection treated through an open-label, single-patient, emergency-use study of patients who were refractory to or intolerant of conventional antifungal therapy. All 556 treatment episodes were evaluable for safety. During the course of ABLC therapy, serum creatinine levels significantly decreased from baseline (P < .02). Among 162 patients with serum creatinine values > or = 2.5 mg/dL at the start of ABLC therapy (baseline), the mean serum creatinine value decreased significantly from the first week through the sixth week (P < or = .0003). Among the 291 mycologically confirmed cases evaluable for therapeutic response, there was a complete or partial response to ABLC in 167 (57%), including 42% (55) of 130 cases of aspergillosis, 67% (28) of 42 cases of disseminated candidiasis, 71% (17) of 24 cases of zygomycosis, and 82% (9) of 11 cases of fusariosis. Response rates varied according to the pattern of invasive fungal infection, underlying condition, and reason for enrollment (intolerance versus progressive infection). These findings support the use of ABLC in the treatment of invasive fungal infections in patients who are intolerant of or refractory to conventional antifungal therapy.",
"title": ""
},
{
"docid": "a338df86cf504d246000c42512473f93",
"text": "Natural Language Processing (NLP) has emerged with a wide scope of research in the area. The Burmese language, also called the Myanmar Language is a resource scarce, tonal, analytical, syllable-timed and principally monosyllabic language with Subject-Object-Verb (SOV) ordering. NLP of Burmese language is also challenged by the fact that it has no white spaces and word boundaries. Keeping these facts in view, the current paper is a first formal attempt to present a bibliography of research works pertinent to NLP tasks in Burmese language. Instead of presenting mere catalogue, the current work is also specifically elaborated by annotations as well as classifications of NLP task research works in NLP related categories. The paper presents the state-of-the-art of Burmese NLP tasks. Both annotations and classifications of NLP tasks of Burmese language are useful to the scientific community as it shows where the field of research in Burmese NLP is going. In fact, to the best of author’s knowledge, this is first work of its kind worldwide for any language. For a period spanning more than 25 years, the paper discusses Burmese language Word Identification, Segmentation, Disambiguation, Collation, Semantic Parsing and Tokenization followed by Part-Of-Speech (POS) Tagging, Machine Translation Systems (MTS), Text Keying/Input, Recognition and Text Display Methods. Burmese language WordNet, Search Engine and influence of other languages on Burmese language are also discussed.",
"title": ""
},
{
"docid": "671573d5f3fc356ee0a5a3e373d6a52f",
"text": "This paper presents a fuzzy logic control for a speed control of DC induction motor. The simulation developed by using Fuzzy MATLAB Toolbox and SIMULINK. The fuzzy logic controller is also introduced to the system for keeping the motor speed to be constant when the load varies. Because of the low maintenance and robustness induction motors have many applications in the industries. The speed control of induction motor is more important to achieve maximum torque and efficiency. The result of the 3x3 matrix fuzzy control rules and 5x5 matrix fuzzy control rules of the theta and speed will do comparison in this paper. Observation the effects of the fuzzy control rules on the performance of the DC- induction motor-speed control.",
"title": ""
},
{
"docid": "872d06c4d3702d79cb1c7bcbc140881a",
"text": "Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation). A prompting service which supplies such information is not a satisfactory solution. Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed. Changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.\nExisting noninferential, formatted data systems provide users with tree-structured files or slightly more general network models of the data. In Section 1, inadequacies of these models are discussed. A model based on n-ary relations, a normal form for data base relations, and the concept of a universal data sublanguage are introduced. In Section 2, certain operations on relations (other than logical inference) are discussed and applied to the problems of redundancy and consistency in the user's model.",
"title": ""
},
{
"docid": "90dfa19b821aeab985a96eba0c3037d3",
"text": "Carcass mass and carcass clothing are factors of potential high forensic importance. In casework, corpses differ in mass and kind or extent of clothing; hence, a question arises whether methods for post-mortem interval estimation should take these differences into account. Unfortunately, effects of carcass mass and clothing on specific processes in decomposition and related entomological phenomena are unclear. In this article, simultaneous effects of these factors are analysed. The experiment followed a complete factorial block design with four levels of carcass mass (small carcasses 5–15 kg, medium carcasses 15.1–30 kg, medium/large carcasses 35–50 kg, large carcasses 55–70 kg) and two levels of carcass clothing (clothed and unclothed). Pig carcasses (N = 24) were grouped into three blocks, which were separated in time. Generally, carcass mass revealed significant and frequently large effects in almost all analyses, whereas carcass clothing had only minor influence on some phenomena related to the advanced decay. Carcass mass differently affected particular gross processes in decomposition. Putrefaction was more efficient in larger carcasses, which manifested itself through earlier onset and longer duration of bloating. On the other hand, active decay was less efficient in these carcasses, with relatively low average rate, resulting in slower mass loss and later onset of advanced decay. The average rate of active decay showed a significant, logarithmic increase with an increase in carcass mass, but only in these carcasses on which active decay was driven solely by larval blowflies. If a blowfly-driven active decay was followed by active decay driven by larval Necrodes littoralis (Coleoptera: Silphidae), which was regularly found in medium/large and large carcasses, the average rate showed only a slight and insignificant increase with an increase in carcass mass. These results indicate that lower efficiency of active decay in larger carcasses is a consequence of a multi-guild and competition-related pattern of this process. Pattern of mass loss in large and medium/large carcasses was not sigmoidal, but rather exponential. The overall rate of decomposition was strongly, but not linearly, related to carcass mass. In a range of low mass decomposition rate increased with an increase in mass, then at about 30 kg, there was a distinct decrease in rate, and again at about 50 kg, the rate slightly increased. Until about 100 accumulated degree-days larger carcasses gained higher total body scores than smaller carcasses. Afterwards, the pattern was reversed; moreover, differences between classes of carcasses enlarged with the progress of decomposition. In conclusion, current results demonstrate that cadaver mass is a factor of key importance for decomposition, and as such, it should be taken into account by decomposition-related methods for post-mortem interval estimation.",
"title": ""
},
{
"docid": "51179905a1ded4b38d7ba8490fbdac01",
"text": "Psychology—the way learning is defined, studied, and understood—underlies much of the curricular and instructional decision-making that occurs in education. Constructivism, perhaps the most current psychology of learning, is no exception. Initially based on the work of Jean Piaget and Lev Vygotsky, and then supported and extended by contemporary biologists and cognitive scientists, it is having major ramifications on the goals teachers set for the learners with whom they work, the instructional strategies teachers employ in working towards these goals, and the methods of assessment utilized by school personnel to document genuine learning. What is this theory of learning and development that is the basis of the current reform movement and how is it different from other models of psychology?",
"title": ""
},
{
"docid": "1fc10d626c7a06112a613f223391de26",
"text": "The question of what makes a face attractive, and whether our preferences come from culture or biology, has fascinated scholars for centuries. Variation in the ideals of beauty across societies and historical periods suggests that standards of beauty are set by cultural convention. Recent evidence challenges this view, however, with infants as young as 2 months of age preferring to look at faces that adults find attractive (Langlois et al., 1987), and people from different cultures showing considerable agreement about which faces are attractive (Cun-for a review). These findings raise the possibility that some standards of beauty may be set by nature rather than culture. Consistent with this view, specific preferences have been identified that appear to be part of our biological rather than Such a preference would be adaptive if stabilizing selection operates on facial traits (Symons, 1979), or if averageness is associated with resistance to pathogens , as some have suggested Evolutionary biologists have proposed that a preference for symmetry would also be adaptive because symmetry is a signal of health and genetic quality Only high-quality individuals can maintain symmetric development in the face of environmental and genetic stresses. Symmetric bodies are certainly attractive to humans and many other animals but what about symmetric faces? Biologists suggest that facial symmetry should be attractive because it may signal mate quality High levels of facial asymmetry in individuals with chro-mosomal abnormalities (e.g., Down's syndrome and Tri-somy 14; for a review, see Thornhill & Møller, 1997) are consistent with this view, as is recent evidence that facial symmetry levels correlate with emotional and psychological health (Shackelford & Larsen, 1997). In this paper, we investigate whether people can detect subtle differences in facial symmetry and whether these differences are associated with differences in perceived attractiveness. Recently, Kowner (1996) has reported that faces with normal levels of asymmetry are more attractive than perfectly symmetric versions of the same faces. 3 Similar results have been reported by Langlois et al. and an anonymous reviewer for helpful comments on an earlier version of the manuscript. We also thank Graham Byatt for assistance with stimulus construction, Linda Jeffery for assistance with the figures, and Alison Clark and Catherine Hickford for assistance with data collection and statistical analysis in Experiment 1A. Evolutionary, as well as cultural, pressures may contribute to our perceptions of facial attractiveness. Biologists predict that facial symmetry should be attractive, because it may signal …",
"title": ""
},
{
"docid": "fbfd3294cfe070ac432bf087fc382b18",
"text": "The alignment of business and information technology (IT) strategies is an important and enduring theoretical challenge for the information systems discipline, remaining a top issue in practice over the past 20 years. Multi-business organizations (MBOs) present a particular alignment challenge because business strategies are developed at the corporate level, within individual strategic business units and across the corporate investment cycle. In contrast, the extant literature implicitly assumes that IT strategy is aligned with a single business strategy at a single point in time. This paper draws on resource-based theory and path dependence to model functional, structural, and temporal IT strategic alignment in MBOs. Drawing on Makadok’s theory of profit, we show how each form of alignment creates value through the three strategic drivers of competence, governance, and flexibility, respectively. We illustrate the model with examples from a case study on the Commonwealth Bank of Australia. We also explore the model’s implications for existing IT alignment models, providing alternative theoretical explanations for how IT alignment creates value. Journal of Information Technology (2015) 30, 101–118. doi:10.1057/jit.2015.1; published online 24 March 2015",
"title": ""
},
{
"docid": "b03273ada7d85d37e4c44f1195c9a450",
"text": "Nowadays the trend to solve optimization problems is to use s pecific algorithms rather than very general ones. The UNLocBoX provides a general framework allowing the user to design his own algorithms. To do so, the framework try to stay as close from the mathematical problem as possible. M ore precisely, the UNLocBoX is a Matlab toolbox designed to solve convex optimi zation problem of the form",
"title": ""
},
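The abstract above is cut off at "of the form". Based on the standard proximal-splitting formulation that the UNLocBoX targets, the truncated expression is most likely the sum-of-convex-functions problem below; this is a reconstruction, not text verified against the original abstract.

```latex
% Likely form of the truncated expression (the standard sum-of-convex-
% functions formulation); a reconstruction, not verified against the source.
\begin{equation}
  \min_{x \in \mathbb{R}^{N}} \; \sum_{n=1}^{K} f_{n}(x),
\end{equation}
% where each $f_n$ is assumed to be a lower semicontinuous convex function
% whose proximal operator (or gradient) can be evaluated.
```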
{
"docid": "48fffb441a5e7f304554e6bdef6b659e",
"text": "The massive accumulation of genome-sequences in public databases promoted the proliferation of genome-level phylogenetic analyses in many areas of biological research. However, due to diverse evolutionary and genetic processes, many loci have undesirable properties for phylogenetic reconstruction. These, if undetected, can result in erroneous or biased estimates, particularly when estimating species trees from concatenated datasets. To deal with these problems, we developed GET_PHYLOMARKERS, a pipeline designed to identify high-quality markers to estimate robust genome phylogenies from the orthologous clusters, or the pan-genome matrix (PGM), computed by GET_HOMOLOGUES. In the first context, a set of sequential filters are applied to exclude recombinant alignments and those producing anomalous or poorly resolved trees. Multiple sequence alignments and maximum likelihood (ML) phylogenies are computed in parallel on multi-core computers. A ML species tree is estimated from the concatenated set of top-ranking alignments at the DNA or protein levels, using either FastTree or IQ-TREE (IQT). The latter is used by default due to its superior performance revealed in an extensive benchmark analysis. In addition, parsimony and ML phylogenies can be estimated from the PGM. We demonstrate the practical utility of the software by analyzing 170 Stenotrophomonas genome sequences available in RefSeq and 10 new complete genomes of Mexican environmental S. maltophilia complex (Smc) isolates reported herein. A combination of core-genome and PGM analyses was used to revise the molecular systematics of the genus. An unsupervised learning approach that uses a goodness of clustering statistic identified 20 groups within the Smc at a core-genome average nucleotide identity (cgANIb) of 95.9% that are perfectly consistent with strongly supported clades on the core- and pan-genome trees. In addition, we identified 16 misclassified RefSeq genome sequences, 14 of them labeled as S. maltophilia, demonstrating the broad utility of the software for phylogenomics and geno-taxonomic studies. The code, a detailed manual and tutorials are freely available for Linux/UNIX servers under the GNU GPLv3 license at https://github.com/vinuesa/get_phylomarkers. A docker image bundling GET_PHYLOMARKERS with GET_HOMOLOGUES is available at https://hub.docker.com/r/csicunam/get_homologues/, which can be easily run on any platform.",
"title": ""
},
{
"docid": "d21308f9ffa990746c6be137964d2e12",
"text": "'Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers', This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "c2fc4e65c484486f5612f4006b6df102",
"text": "Although flat item category structure where categories are independent in a same level has been well studied to enhance recommendation performance, in many real applications, item category is often organized in hierarchies to reflect the inherent correlations among categories. In this paper, we propose a novel matrix factorization model by exploiting category hierarchy from the perspectives of users and items for effective recommendation. Specifically, a user (an item) can be influenced (characterized) by her preferred categories (the categories it belongs to) in the hierarchy. We incorporate how different categories in the hierarchy co-influence a user and an item. Empirical results show the superiority of our approach against other counterparts.",
"title": ""
},
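The abstract above describes a matrix-factorization model in which the category hierarchy influences both users and items. The sketch below is only an illustration of one way a hierarchy can enter such a model (augmenting an item's latent vector with its ancestor categories' vectors and training with SGD); the objective, the toy hierarchy, and all hyperparameters are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch (not the paper's exact model): matrix factorization in
# which an item's latent vector is augmented with the latent vectors of its
# ancestor categories in the hierarchy, trained with plain SGD.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_cats, k = 50, 40, 7, 8
# Toy two-level "hierarchy": each item has a leaf category and its parent.
item_ancestors = {i: [i % n_cats, (i % n_cats) // 2] for i in range(n_items)}

P = 0.1 * rng.normal(size=(n_users, k))   # user factors
Q = 0.1 * rng.normal(size=(n_items, k))   # item factors
C = 0.1 * rng.normal(size=(n_cats, k))    # category factors

def item_vec(i):
    """Item representation = own factors + mean of its ancestor categories."""
    return Q[i] + C[item_ancestors[i]].mean(axis=0)

ratings = [(rng.integers(n_users), rng.integers(n_items), rng.integers(1, 6))
           for _ in range(500)]

lr, reg = 0.02, 0.05
for epoch in range(30):
    for u, i, r in ratings:
        iv, pu = item_vec(i), P[u].copy()
        err = r - pu @ iv
        P[u] += lr * (err * iv - reg * pu)
        Q[i] += lr * (err * pu - reg * Q[i])
        for c in item_ancestors[i]:
            C[c] += lr * (err * pu / len(item_ancestors[i]) - reg * C[c])
print("training finished")
```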
{
"docid": "1ecbdb3a81e046452905105600b90780",
"text": "Identity-invariant estimation of head pose from still images is a challenging task due to the high variability of facial appearance. We present a novel 3D head pose estimation approach, which utilizes the flexibility and expressibility of a dense generative 3D facial model in combination with a very fast fitting algorithm. The efficiency of the head pose estimation is obtained by a 2D synthesis of the facial input image. This optimization procedure drives the appearance and pose of the 3D facial model. In contrast to many other approaches we are specifically interested in the more difficult task of head pose estimation from still images, instead of tracking faces in image sequences. We evaluate our approach on two publicly available databases (FacePix and USF HumanID) and compare our method to the 3D morphable model and other state of the art approaches in terms of accuracy and speed.",
"title": ""
},
{
"docid": "2ce36ce9de500ba2367b1af83ac3e816",
"text": "We examine whether the information content of the earnings report, as captured by the earnings response coefficient (ERC), increases when investors’ uncertainty about the manager’s reporting objectives decreases, as predicted in Fischer and Verrecchia (2000). We use the 2006 mandatory compensation disclosures as an instrument to capture a decrease in investors’ uncertainty about managers’ incentives and reporting objectives. Employing a difference-in-differences design and exploiting the staggered adoption of the new rules, we find a statistically and economically significant increase in ERC for treated firms relative to control firms, largely driven by profit firms. Cross-sectional tests suggest that the effect is more pronounced in subsets of firms most affected by the new rules. Our findings represent the first empirical evidence of a role of compensation disclosures in enhancing the information content of financial reports. JEL Classification: G38, G30, G34, M41",
"title": ""
},
{
"docid": "959ad8268836d34648a52c449f5de987",
"text": "There is widespread sentiment that fast gradient methods (e.g. Nesterov’s acceleration, conjugate gradient, heavy ball) are not effective for the purposes of stochastic optimization due to their instability and error accumulation. Numerous works have attempted to quantify these instabilities in the face of either statistical or non-statistical errors (Paige, 1971; Proakis, 1974; Polyak, 1987; Greenbaum, 1989; Roy and Shynk, 1990; Sharma et al., 1998; d’Aspremont, 2008; Devolder et al., 2014; Yuan et al., 2016). This work considers these issues for the special case of stochastic approximation for the least squares regression problem, and our main result refutes this conventional wisdom by showing that acceleration can be made robust to statistical errors. In particular, this work introduces an accelerated stochastic gradient method that provably achieves the minimax optimal statistical risk faster than stochastic gradient descent. Critical to the analysis is a sharp characterization of accelerated stochastic gradient descent as a stochastic process. We hope this characterization gives insights towards the broader question of designing simple and effective accelerated stochastic methods for more general convex and non-convex optimization problems.",
"title": ""
},
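The abstract above concerns accelerated stochastic gradient methods for least-squares regression. The sketch below shows a generic Nesterov-momentum SGD update on a least-squares problem so the family of updates being discussed is concrete; it is not the paper's specific algorithm, step-size schedule, or tail-averaging scheme, and all constants are arbitrary.

```python
# Generic sketch of an accelerated (Nesterov-momentum) stochastic gradient
# method on least-squares regression; illustrative of the family of methods
# discussed above, NOT the paper's specific algorithm or parameterization.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 20
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)

def accel_sgd(X, y, lr=0.002, momentum=0.9, epochs=10):
    w = np.zeros(X.shape[1])
    v = np.zeros_like(w)
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            lookahead = w + momentum * v             # Nesterov look-ahead point
            grad = (X[i] @ lookahead - y[i]) * X[i]  # single-sample gradient
            v = momentum * v - lr * grad
            w = w + v
    return w

w_hat = accel_sgd(X, y)
print(np.linalg.norm(w_hat - w_star))  # should be a small residual error
```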
{
"docid": "3c33528735b53a4f319ce4681527c163",
"text": "Within the past two years, important advances have been made in modeling credit risk at the portfolio level. Practitioners and policy makers have invested in implementing and exploring a variety of new models individually. Less progress has been made, however, with comparative analyses. Direct comparison often is not straightforward, because the different models may be presented within rather different mathematical frameworks. This paper offers a comparative anatomy of two especially influential benchmarks for credit risk models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We then design simulation exercises which evaluate the effect of each of these differences individually. JEL Codes: G31, C15, G11 ∗The views expressed herein are my own and do not necessarily reflect those of the Board of Governors or its staff. I would like to thank David Jones for drawing my attention to this issue, and for his helpful comments. I am also grateful to Mark Carey for data and advice useful in calibration of the models, and to Chris Finger and Tom Wilde for helpful comments. Please address correspondence to the author at Division of Research and Statistics, Mail Stop 153, Federal Reserve Board, Washington, DC 20551, USA. Phone: (202)452-3705. Fax: (202)452-5295. Email: 〈[email protected]〉. Over the past decade, financial institutions have developed and implemented a variety of sophisticated models of value-at-risk for market risk in trading portfolios. These models have gained acceptance not only among senior bank managers, but also in amendments to the international bank regulatory framework. Much more recently, important advances have been made in modeling credit risk in lending portfolios. The new models are designed to quantify credit risk on a portfolio basis, and thus have application in control of risk concentration, evaluation of return on capital at the customer level, and more active management of credit portfolios. Future generations of today’s models may one day become the foundation for measurement of regulatory capital adequacy. Two of the models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+, have been released freely to the public since 1997 and have quickly become influential benchmarks. Practitioners and policy makers have invested in implementing and exploring each of the models individually, but have made less progress with comparative analyses. The two models are intended to measure the same risks, but impose different restrictions and distributional assumptions, and suggest different techniques for calibration and solution. Thus, given the same portfolio of credit exposures, the two models will, in general, yield differing evaluations of credit risk. Determining which features of the models account for differences in output would allow us a better understanding of the sensitivity of the models to the particular assumptions they employ. Unfortunately, direct comparison of the models is not straightforward, because the two models are presented within rather different mathematical frameworks. The CreditMetrics model is familiar to econometricians as an ordered probit model. 
Credit events are driven by movements in underlying unobserved latent variables. The latent variables are assumed to depend on external “risk factors.” Common dependence on the same risk factors gives rise to correlations in credit events across obligors. The CreditRisk+ model is based instead on insurance industry models of event risk. Instead of a latent variable, each obligor has a default probability. The default probabilities are not constant over time, but rather increase or decrease in response to background macroeconomic factors. To the extent that two obligors are sensitive to the same set of background factors, their default probabilities will move together. These co-movements in probability give rise to correlations in defaults. CreditMetrics and CreditRisk+ may serve essentially the same function, but they appear to be constructed quite differently. This paper offers a comparative anatomy of CreditMetrics and CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We can then design simulation exercises which evaluate the effect of these differences individually. We proceed as follows. Section 1 presents a summary of the CreditRisk+ model, and introduces a restricted version of CreditMetrics. The restrictions are imposed to facilitate direct comparison of CreditMetrics and CreditRisk+. While some of the richness of the full CreditMetrics implementation is sacrificed, the essential mathematical characteristics of the model are preserved. Our",
"title": ""
},
{
"docid": "56a072fc480c64e6a288543cee9cd5ac",
"text": "The performance of object detection has recently been significantly improved due to the powerful features learnt through convolutional neural networks (CNNs). Despite the remarkable success, there are still several major challenges in object detection, including object rotation, within-class diversity, and between-class similarity, which generally degenerate object detection performance. To address these issues, we build up the existing state-of-the-art object detection systems and propose a simple but effective method to train rotation-invariant and Fisher discriminative CNN models to further boost object detection performance. This is achieved by optimizing a new objective function that explicitly imposes a rotation-invariant regularizer and a Fisher discrimination regularizer on the CNN features. Specifically, the first regularizer enforces the CNN feature representations of the training samples before and after rotation to be mapped closely to each other in order to achieve rotation-invariance. The second regularizer constrains the CNN features to have small within-class scatter but large between-class separation. We implement our proposed method under four popular object detection frameworks, including region-CNN (R-CNN), Fast R- CNN, Faster R- CNN, and R- FCN. In the experiments, we comprehensively evaluate the proposed method on the PASCAL VOC 2007 and 2012 data sets and a publicly available aerial image data set. Our proposed methods outperform the existing baseline methods and achieve the state-of-the-art results.",
"title": ""
},
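The abstract above imposes a rotation-invariant regularizer that maps features of a sample and its rotated copies close together. The toy sketch below shows only that regularizer idea with a stand-in linear "feature extractor"; in the paper it is applied to CNN features inside R-CNN-style detectors, and the Fisher-discrimination term is omitted here.

```python
# Toy NumPy sketch of the rotation-invariance regularizer idea: penalize the
# distance between the features of an image and those of its rotated copies.
# The linear feature map W is a placeholder for a CNN.
import numpy as np

def features(img, W):
    return W @ img.reshape(-1)          # stand-in for a CNN feature extractor

def rotation_invariance_loss(img, W, n_rot=4):
    base = features(img, W)
    loss = 0.0
    for k in range(1, n_rot):
        rotated = np.rot90(img, k)      # 90-degree rotations of the input
        loss += np.sum((features(rotated, W) - base) ** 2)
    return loss / (n_rot - 1)

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
W = rng.normal(size=(16, 32 * 32))
print(rotation_invariance_loss(img, W))
```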
{
"docid": "7fd5f3461742db10503dd5e3d79fe3ed",
"text": "There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.",
"title": ""
},
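The tutorial abstract above says it provides starter code for medical image classification. The snippet below is a generic, minimal tf.keras image classifier of the kind such tutorials walk through; it is not the article's actual code, and the input shape, class count, and random stand-in data are placeholders.

```python
# Generic minimal image-classification CNN in tf.keras, of the kind such
# tutorials describe; NOT the article's actual code. Shapes and the random
# "images" below are placeholders for a labelled medical-imaging dataset.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),   # e.g. normal vs. abnormal
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(32, 64, 64, 1).astype("float32")   # placeholder images
y = np.random.randint(0, 2, size=32)                  # placeholder labels
model.fit(x, y, epochs=1, batch_size=8, verbose=0)
print(model.predict(x[:1]).shape)   # (1, 2) class probabilities
```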
{
"docid": "14032695043a1cc16239317e496bac35",
"text": "The rearing of bees is a quite difficult job since it requires experience and time. Beekeepers are used to take care of their bee colonies observing them and learning to interpret their behavior. Despite the rearing of bees represents one of the most antique human habits, nowadays bees risk the extinction principally because of the increasing pollution levels related to human activity. It is important to increase our knowledge about bees in order to develop new practices intended to improve their protection. These practices could include new technologies, in order to increase profitability of beekeepers and economical interest related to bee rearing, but also innovative rearing techniques, genetic selections, environmental politics and so on. Moreover bees, since they are very sensitive to pollution, are considered environmental indicators, and the research on bees could give important information about the conditions of soil, air and water. In this paper we propose a real hardware and software solution for apply the internet-of-things concept to bees in order to help beekeepers to improve their business and collect data for research purposes.",
"title": ""
},
{
"docid": "83195a7a81b58fb7c22b1bb1d806eb42",
"text": "We demonstrate high-performance, flexible, transparent heaters based on large-scale graphene films synthesized by chemical vapor deposition on Cu foils. After multiple transfers and chemical doping processes, the graphene films show sheet resistance as low as ∼43 Ohm/sq with ∼89% optical transmittance, which are ideal as low-voltage transparent heaters. Time-dependent temperature profiles and heat distribution analyses show that the performance of graphene-based heaters is superior to that of conventional transparent heaters based on indium tin oxide. In addition, we confirmed that mechanical strain as high as ∼4% did not substantially affect heater performance. Therefore, graphene-based, flexible, transparent heaters are expected to find uses in a broad range of applications, including automobile defogging/deicing systems and heatable smart windows.",
"title": ""
}
] |
scidocsrr
|
bb83e8b1e9d238b4483e1dd29c62e1ab
|
Tangential beam IMRT versus tangential beam 3D-CRT of the chest wall in postmastectomy breast cancer patients: A dosimetric comparison
|
[
{
"docid": "94ceacc37c20034658dae3008ed59ab2",
"text": "BACKGROUND\nIn early breast cancer, variations in local treatment that substantially affect the risk of locoregional recurrence could also affect long-term breast cancer mortality. To examine this relationship, collaborative meta-analyses were undertaken, based on individual patient data, of the relevant randomised trials that began by 1995.\n\n\nMETHODS\nInformation was available on 42,000 women in 78 randomised treatment comparisons (radiotherapy vs no radiotherapy, 23,500; more vs less surgery, 9300; more surgery vs radiotherapy, 9300). 24 types of local treatment comparison were identified. To help relate the effect on local (ie, locoregional) recurrence to that on breast cancer mortality, these were grouped according to whether or not the 5-year local recurrence risk exceeded 10% (<10%, 17,000 women; >10%, 25,000 women).\n\n\nFINDINGS\nAbout three-quarters of the eventual local recurrence risk occurred during the first 5 years. In the comparisons that involved little (<10%) difference in 5-year local recurrence risk there was little difference in 15-year breast cancer mortality. Among the 25,000 women in the comparisons that involved substantial (>10%) differences, however, 5-year local recurrence risks were 7% active versus 26% control (absolute reduction 19%), and 15-year breast cancer mortality risks were 44.6% versus 49.5% (absolute reduction 5.0%, SE 0.8, 2p<0.00001). These 25,000 women included 7300 with breast-conserving surgery (BCS) in trials of radiotherapy (generally just to the conserved breast), with 5-year local recurrence risks (mainly in the conserved breast, as most had axillary clearance and node-negative disease) 7% versus 26% (reduction 19%), and 15-year breast cancer mortality risks 30.5% versus 35.9% (reduction 5.4%, SE 1.7, 2p=0.0002; overall mortality reduction 5.3%, SE 1.8, 2p=0.005). They also included 8500 with mastectomy, axillary clearance, and node-positive disease in trials of radiotherapy (generally to the chest wall and regional lymph nodes), with similar absolute gains from radiotherapy; 5-year local recurrence risks (mainly at these sites) 6% versus 23% (reduction 17%), and 15-year breast cancer mortality risks 54.7% versus 60.1% (reduction 5.4%, SE 1.3, 2p=0.0002; overall mortality reduction 4.4%, SE 1.2, 2p=0.0009). Radiotherapy produced similar proportional reductions in local recurrence in all women (irrespective of age or tumour characteristics) and in all major trials of radiotherapy versus not (recent or older; with or without systemic therapy), so large absolute reductions in local recurrence were seen only if the control risk was large. To help assess the life-threatening side-effects of radiotherapy, the trials of radiotherapy versus not were combined with those of radiotherapy versus more surgery. There was, at least with some of the older radiotherapy regimens, a significant excess incidence of contralateral breast cancer (rate ratio 1.18, SE 0.06, 2p=0.002) and a significant excess of non-breast-cancer mortality in irradiated women (rate ratio 1.12, SE 0.04, 2p=0.001). Both were slight during the first 5 years, but continued after year 15. 
The excess mortality was mainly from heart disease (rate ratio 1.27, SE 0.07, 2p=0.0001) and lung cancer (rate ratio 1.78, SE 0.22, 2p=0.0004).\n\n\nINTERPRETATION\nIn these trials, avoidance of a local recurrence in the conserved breast after BCS and avoidance of a local recurrence elsewhere (eg, the chest wall or regional nodes) after mastectomy were of comparable relevance to 15-year breast cancer mortality. Differences in local treatment that substantially affect local recurrence rates would, in the hypothetical absence of any other causes of death, avoid about one breast cancer death over the next 15 years for every four local recurrences avoided, and should reduce 15-year overall mortality.",
"title": ""
}
] |
[
{
"docid": "86f0fa880f2a72cd3bf189132cc2aa44",
"text": "The advent of new technical solutions has offered a vast scope to encounter the existing challenges in tablet coating technology. One such outcome is the usage of innovative aqueous coating compositions to meet the limitations of organic based coating. The present study aimed at development of delayed release pantoprazole sodium tablets by coating with aqueous acrylic system belonging to methacrylic acid copolymer and to investigate the ability of the dosage form to protect the drug from acid milieu and to release rapidly in the duodenal pH. The core tablets were produced by direct compression using different disintegrants in variable concentrations. The physicochemical properties of all the tablets were consistent and satisfactory. Crosspovidone at 7.5% proved to be a better disintegrant with rapid disintegration with a minute, owing to its wicking properties. The optimized formulations were seal coated using HPMC dispersion to act as a barrier between the acid liable drug and enteric film coatings. The subcoating process was followed by enteric coating of tablets by the application of acryl-Eze at different theoretical weight gains. Enteric coated formulations were subjected to disintegration and dissolution tests by placing them in 0.1 N HCl for 2 h and then in pH 6.8 phosphate buffer for 1 h. The coated tablets remained static without peeling or cracking in the acid media, however instantly disintegrated in the intestinal pH. In the in vitro release studies, the optimized tablets released 0.16% in the acid media and 96% in the basic media which are well within the selected criteria. Results of the stability tests were satisfactory with the dissolution rate and assays were within acceptable limits. The results ascertained the acceptability of the aqueous based enteric coating composition for the successful development of delayed release, duodenal specific dosage forms for proton pump inhibitors.",
"title": ""
},
{
"docid": "ea1072f2972dbf15ef8c2d38704a0095",
"text": "The reliability of the microinverter is a very important feature that will determine the reliability of the ac-module photovoltaic (PV) system. Recently, many topologies and techniques have been proposed to improve its reliability. This paper presents a thorough study for different power decoupling techniques in single-phase microinverters for grid-tie PV applications. These power decoupling techniques are categorized into three groups in terms of the decoupling capacitor locations: 1) PV-side decoupling; 2) dc-link decoupling; and 3) ac-side decoupling. Various techniques and topologies are presented, compared, and scrutinized in scope of the size of decoupling capacitor, efficiency, and control complexity. Also, a systematic performance comparison is presented for potential power decoupling topologies and techniques.",
"title": ""
},
{
"docid": "1a2d9da5b42a7ae5a8dcf5fef48cfe26",
"text": "The space of bio-inspired hardware can be partitioned along three axes: phylogeny, ontogeny, and epigenesis. We refer to this as the POE model. Our Embryonics (for embryonic electronics) project is situated along the ontogenetic axis of the POE model and is inspired by the processes of molecular biology and by the embryonic development of living beings. We will describe the architecture of multicellular automata that are endowed with self-replication and self-repair properties. In the conclusion, we will present our major on-going project: a giant self-repairing electronic watch, the BioWatch, built on a new reconfigurable tissue, the electronic wall or e–wall.",
"title": ""
},
{
"docid": "da7f869037f40ab8666009d85d9540ff",
"text": "A boomerang-shaped alar base excision is described to narrow the nasal base and correct the excessive alar flare. The boomerang excision combined the external alar wedge resection with an internal vestibular floor excision. The internal excision was inclined 30 to 45 degrees laterally to form the inner limb of the boomerang. The study included 46 patients presenting with wide nasal base and excessive alar flaring. All cases were followed for a mean period of 18 months (range, 8 to 36 months). The laterally oriented vestibular floor excision allowed for maximum preservation of the natural curvature of the alar rim where it meets the nostril floor and upon its closure resulted in a considerable medialization of alar lobule, which significantly reduced the amount of alar flare and the amount of external alar excision needed. This external alar excision measured, on average, 3.8 mm (range, 2 to 8 mm), which is significantly less than that needed when a standard vertical internal excision was used ( P < 0.0001). Such conservative external excisions eliminated the risk of obliterating the natural alar-facial crease, which did not occur in any of our cases. No cases of postoperative bleeding, infection, or vestibular stenosis were encountered. Keloid or hypertrophic scar formation was not encountered; however, dermabrasion of the scars was needed in three (6.5%) cases to eliminate apparent suture track marks. The boomerang alar base excision proved to be a safe and effective technique for narrowing the nasal base and elimination of the excessive flaring and resulted in a natural, well-proportioned nasal base with no obvious scarring.",
"title": ""
},
{
"docid": "80ece123483d6de02c4e621bdb8eb0fc",
"text": "Resistive-switching memory (RRAM) based on transition metal oxides is a potential candidate for replacing Flash and dynamic random access memory in future generation nodes. Although very promising from the standpoints of scalability and technology, RRAM still has severe drawbacks in terms of understanding and modeling of the resistive-switching mechanism. This paper addresses the modeling of resistive switching in bipolar metal-oxide RRAMs. Reset and set processes are described in terms of voltage-driven ion migration within a conductive filament generated by electroforming. Ion migration is modeled by drift–diffusion equations with Arrhenius-activated diffusivity and mobility. The local temperature and field are derived from the self-consistent solution of carrier and heat conduction equations in a 3-D axis-symmetric geometry. The model accounts for set–reset characteristics, correctly describing the abrupt set and gradual reset transitions and allowing scaling projections for metal-oxide RRAM.",
"title": ""
},
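The RRAM abstract above models ion migration with drift–diffusion equations whose diffusivity and mobility are Arrhenius-activated by the local temperature. The equations below give the generic form of such laws for orientation only; the notation and the exact balance are assumptions, not necessarily the paper's formulation.

```latex
% Generic form of Arrhenius-activated transport of the kind the model
% describes (notation is generic, not necessarily the paper's):
\begin{align}
  D(T) &= D_{0}\, e^{-E_{A}/kT}, &
  \mu(T) &= \mu_{0}\, e^{-E_{A}/kT}, \\
  \frac{\partial n_{D}}{\partial t} &=
      \nabla \cdot \bigl( D \,\nabla n_{D} - \mu\, F\, n_{D} \bigr),
\end{align}
% i.e. a drift-diffusion balance for the migrating ion concentration $n_D$
% under the local electric field $F$ and temperature $T$.
```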
{
"docid": "8e109bae5f59f84bb9b2ad88acfac446",
"text": "A proposal is made to use blockchain technology for recording contracts. A new protocol using the technology is described that makes it possible to confirm that contractor consent has been obtained and to archive the contractual document in the blockchain.",
"title": ""
},
{
"docid": "8c284159d0ba43f67c3c478763e7f200",
"text": "We develop a new graph-theoretic approach for pairwise data clustering which is motivated by the analogies between the intuitive concept of a cluster and that of a dominant set of vertices, a notion introduced here which generalizes that of a maximal complete subgraph to edge-weighted graphs. We establish a correspondence between dominant sets and the extrema of a quadratic form over the standard simplex, thereby allowing the use of straightforward and easily implementable continuous optimization techniques from evolutionary game theory. Numerical examples on various point-set and image segmentation problems confirm the potential of the proposed approach",
"title": ""
},
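The dominant-sets abstract above says the extrema of a quadratic form over the standard simplex are found with straightforward techniques from evolutionary game theory; the standard choice for this is discrete-time replicator dynamics. The sketch below shows that update on a toy affinity matrix; the matrix and threshold are illustrative.

```python
# Sketch of the standard evolutionary-game-theory tool for this problem:
# discrete-time replicator dynamics maximizing x^T A x over the standard
# simplex (A = symmetric edge-weight/affinity matrix with zero diagonal).
import numpy as np

def replicator_dynamics(A, iters=1000, tol=1e-8):
    x = np.full(A.shape[0], 1.0 / A.shape[0])   # start at the barycenter
    for _ in range(iters):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)               # multiplicative update
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x                                    # support of x ~ one dominant set

# Toy affinity matrix: a tightly coupled cluster {0, 1, 2} plus two outliers.
A = np.array([[0.0, 0.9, 0.8, 0.1, 0.0],
              [0.9, 0.0, 0.85, 0.1, 0.1],
              [0.8, 0.85, 0.0, 0.0, 0.1],
              [0.1, 0.1, 0.0, 0.0, 0.2],
              [0.0, 0.1, 0.1, 0.2, 0.0]])
x = replicator_dynamics(A)
print(np.where(x > 1e-3)[0])   # indices carrying non-negligible weight
```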
{
"docid": "8dc400d9745983da1e91f0cec70606c9",
"text": "Aspect-Oriented Programming (AOP) is intended to ease situations that involve many kinds of code tangling. This paper reports on a study to investigate AOP's ability to ease tangling related to exception detection and handling. We took an existing framework written in Java™, the JWAM framework, and partially reengineered its exception detection and handling aspects using AspectJ™, an aspect-oriented programming extension to Java.\nWe found that AspectJ supported implementations that drastically reduced the portion of the code related to exception detection and handling. In one scenario, we were able to reduce that code by a factor of 4. We also found that, with respect to the original implementation in plain Java, AspectJ provided better support for different configurations of exceptional behaviors, more tolerance for changes in the specifications of exceptional behaviors, better support for incremental development, better reuse, automatic enforcement of contracts in applications that use the framework, and cleaner program texts. We also found some weaknesses of AspectJ that should be addressed in the future.",
"title": ""
},
{
"docid": "81385958cac7df4cc51b35762e6c2806",
"text": "DDoS attacks remain a serious threat not only to the edge of the Internet but also to the core peering links at Internet Exchange Points (IXPs). Currently, the main mitigation technique is to blackhole traffic to a specific IP prefix at upstream providers. Blackholing is an operational technique that allows a peer to announce a prefix via BGP to another peer, which then discards traffic destined for this prefix. However, as far as we know there is only anecdotal evidence of the success of blackholing. Largely unnoticed by research communities, IXPs have deployed blackholing as a service for their members. In this first-of-its-kind study, we shed light on the extent to which blackholing is used by the IXP members and what effect it has on traffic. Within a 12 week period we found that traffic to more than 7, 864 distinct IP prefixes was blackholed by 75 ASes. The daily patterns emphasize that there are not only a highly variable number of new announcements every day but, surprisingly, there are a consistently high number of announcements (> 1000). Moreover, we highlight situations in which blackholing succeeds in reducing the DDoS attack traffic.",
"title": ""
},
{
"docid": "8da0d4884947d973a9121ea8f726ea61",
"text": "Soil and water pollution is becoming one of major burden in modern Indian society due to industrialization. Though there are many methods to remove the heavy metal from soil and water pollution but biosorption is one of the best scientific methods to remove heavy metal from water sample by using biomolecules and bacteria. Biosorbent have the ability to bind the heavy metal and therefore can remove from polluted water. Currently, we have taken the water sample from Ballendur Lake, Bangalore. Which is highly polluted due to industries besides this lake. This sample of water was serially diluted to 10-7. 10-4 and 10-5 diluted sample was allowed to stand in Tryptone Glucose Extract agar media mixed with the different concentrations of lead acetate for 24 hours. Microflora growth was observed. Then we cultured in different temperature, pH and different age of culture media. Finally, we did the biochemical test to identify the bacteria isolate and we found till genus level, it could be either Streptococcus sp. or Enterococcus sp.",
"title": ""
},
{
"docid": "2534ef0135eaba7e85a44c81c637adae",
"text": "k e 9 { Vol. 32, No. 1 2006 O 1 2 ACTA AUTOMATICA SINICA January, 2006 82 7? CPG , ? \". C4; 1) INH1, 3, 4 JKF1, 2 G D1 LME1 1(W\" z. jd8,EYpz\\\\ 110016) 2(u = Hz COE(Center of Excellence) <pE g. 525-8577 u ) ( Hz 110168) 4(W\" z. y . s 100039) (E-mail: [email protected]) < 5 ~ P}*}nFZqTf L4f1℄ ~< Q CPG \\? 4 \\( }nFZq uD6?j }nFZq?j |= E( CPG ?jmy 4f QT br 3<! E(ÆX FT A QT CPG -!adyu r1 <(m1T zy _ } }nFZq X ? r Z ~< Q y 4f & TP24",
"title": ""
},
{
"docid": "104c71324594c907f87d483c8c222f0f",
"text": "Operational controls are designed to support the integration of wind and solar power within microgrids. An aggregated model of renewable wind and solar power generation forecast is proposed to support the quantification of the operational reserve for day-ahead and real-time scheduling. Then, a droop control for power electronic converters connected to battery storage is developed and tested. Compared with the existing droop controls, it is distinguished in that the droop curves are set as a function of the storage state-of-charge (SOC) and can become asymmetric. The adaptation of the slopes ensures that the power output supports the terminal voltage while at the same keeping the SOC within a target range of desired operational reserve. This is shown to maintain the equilibrium of the microgrid's real-time supply and demand. The controls are implemented for the special case of a dc microgrid that is vertically integrated within a high-rise host building of an urban area. Previously untapped wind and solar power are harvested on the roof and sides of a tower, thereby supporting delivery to electric vehicles on the ground. The microgrid vertically integrates with the host building without creating a large footprint.",
"title": ""
},
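The microgrid abstract above describes droop curves for the storage converter whose slopes depend on the state of charge (SOC) and can become asymmetric. The sketch below illustrates that idea with one possible shaping; the parameter values, limits, and linear back-off are assumptions, not the paper's control law.

```python
# Illustrative SOC-dependent, asymmetric droop curve for a DC-bus storage
# converter: power command as a function of bus-voltage deviation, with
# slopes that depend on the state of charge. Values are assumptions only.
import numpy as np

def droop_power(v_bus, soc, v_nom=380.0, k_base=50.0,
                soc_min=0.2, soc_max=0.9):
    """Return a battery power command (>0 = discharge) for a given bus voltage."""
    dv = v_nom - v_bus                  # positive when the bus voltage sags
    if dv >= 0:
        # Discharging branch: back off as SOC approaches its lower limit.
        k = k_base * np.clip((soc - soc_min) / (1 - soc_min), 0.0, 1.0)
    else:
        # Charging branch: back off as SOC approaches its upper limit.
        k = k_base * np.clip((soc_max - soc) / soc_max, 0.0, 1.0)
    return k * dv                        # W per volt of deviation

for soc in (0.25, 0.55, 0.85):
    print(soc, droop_power(375.0, soc), droop_power(385.0, soc))
```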
{
"docid": "67bef3bbd769010e91548649eae454fa",
"text": "As networked and computer technologies continue to pervade all aspects of our lives, the threat from cyber attacks has also increased. However, detecting attacks, much less predicting them in advance, is a non-trivial task due to the anonymity of cyber attackers and the ambiguity of network data collected within an organization; often, by the time an attack pattern is recognized, the damage has already been done. Evidence suggests that the public discourse in external sources, such as news and social media, is often correlated with the occurrence of larger phenomena, such as election results or violent attacks. In this paper, we propose an approach that uses sentiment polarity as a sensor to analyze the social behavior of groups on social media as an indicator of cyber attack behavior. We developed an unsupervised sentiment prediction method that uses emotional signals to enhance the sentiment signal from sparse textual indicators. To explore the efficacy of sentiment polarity as an indicator of cyberattacks, we performed experiments using real-world data from Twitter that corresponds to attacks by a well-known hacktivist group.",
"title": ""
},
{
"docid": "a016fb3b7e5c4bcf386d775c7c61a887",
"text": "How do journalists mark quoted content as certain or uncertain, and how do readers interpret these signals? Predicates such as thinks, claims, and admits offer a range of options for framing quoted content according to the author’s own perceptions of its credibility. We gather a new dataset of direct and indirect quotes from Twitter, and obtain annotations of the perceived certainty of the quoted statements. We then compare the ability of linguistic and extra-linguistic features to predict readers’ assessment of the certainty of quoted content. We see that readers are indeed influenced by such framing devices — and we find no evidence that they consider other factors, such as the source, journalist, or the content itself. In addition, we examine the impact of specific framing devices on perceptions of credibility.",
"title": ""
},
{
"docid": "f81dd0c86a7b45e743e4be117b4030c2",
"text": "Stock market prediction is of great importance for financial analysis. Traditionally, many studies only use the news or numerical data for the stock market prediction. In the recent years, in order to explore their complementary, some studies have been conducted to equally treat dual sources of information. However, numerical data often play a much more important role compared with the news. In addition, the existing simple combination cannot exploit their complementarity. In this paper, we propose a numerical-based attention (NBA) method for dual sources stock market prediction. Our major contributions are summarized as follows. First, we propose an attention-based method to effectively exploit the complementarity between news and numerical data in predicting the stock prices. The stock trend information hidden in the news is transformed into the importance distribution of numerical data. Consequently, the news is encoded to guide the selection of numerical data. Our method can effectively filter the noise and make full use of the trend information in news. Then, in order to evaluate our NBA model, we collect news corpus and numerical data to build three datasets from two sources: the China Security Index 300 (CSI300) and the Standard & Poor’s 500 (S&P500). Extensive experiments are conducted, showing that our NBA is superior to previous models in dual sources stock price prediction.",
"title": ""
},
{
"docid": "faa3d0432cbade209fa876240c5db4c0",
"text": "BACKGROUND\nDespite the clinical importance of atrial fibrillation (AF), the development of chronic nonvalvular AF models has been difficult. Animal models of sustained AF have been developed primarily in the short-term setting. Recently, models of chronic ventricular myopathy and fibrillation have been developed after several weeks of continuous rapid ventricular pacing. We hypothesized that chronic rapid atrial pacing would lead to atrial myopathy, yielding a reproducible model of sustained AF.\n\n\nMETHODS AND RESULTS\nTwenty-two halothane-anesthetized mongrel dogs underwent insertion of a transvenous lead at the right atrial appendage that was continuously paced at 400 beats per minute for 6 weeks. Two-dimensional echocardiography was performed in 11 dogs to assess the effects of rapid atrial pacing on atrial size. Atrial vulnerability was defined as the ability to induce sustained repetitive atrial responses during programmed electrical stimulation and was assessed by extrastimulus and burst-pacing techniques. Effective refractory period (ERP) was measured at two endocardial sites in the right atrium. Sustained AF was defined as AF > or = 15 minutes. In animals with sustained AF, 10 quadripolar epicardial electrodes were surgically attached to the right and left atria. The local atrial fibrillatory cycle length (AFCL) was measured in a 20-second window, and the mean AFCL was measured at each site. Marked biatrial enlargement was documented; after 6 weeks of continuous rapid atrial pacing, the left atrium was 7.8 +/- 1 cm2 at baseline versus 11.3 +/- 1 cm2 after pacing, and the right atrium was 4.3 +/- 0.7 cm2 at baseline versus 7.2 +/- 1.3 cm2 after pacing. An increase in atrial area of at least 40% was necessary to induce sustained AF and was strongly correlated with the inducibility of AF (r = .87). Electron microscopy of atrial tissue demonstrated structural changes that were characterized by an increase in mitochondrial size and number and by disruption of the sarcoplasmic reticulum. After 6 weeks of continuous rapid atrial pacing, sustained AF was induced in 18 dogs (82%) and nonsustained AF was induced in 2 dogs (9%). AF occurred spontaneously in 4 dogs (18%). Right atrial ERP, measured at cycle lengths of 400 and 300 milliseconds at baseline, was significantly shortened after pacing, from 150 +/- 8 to 127 +/- 10 milliseconds and from 147 +/- 11 to 123 +/- 12 milliseconds, respectively (P < .001). This finding was highly predictive of inducibility of AF (90%). Increased atrial area (40%) and ERP shortening were highly predictive for the induction of sustained AF (88%). Local epicardial ERP correlated well with local AFCL (R2 = .93). Mean AFCL was significantly shorter in the left atrium (81 +/- 8 milliseconds) compared with the right atrium 94 +/- 9 milliseconds (P < .05). An area in the posterior left atrium was consistently found to have a shorter AFCL (74 +/- 5 milliseconds). Cryoablation of this area was attempted in 11 dogs. In 9 dogs (82%; mean, 9.0 +/- 4.0; range, 5 to 14), AF was terminated and no longer induced after serial cryoablation.\n\n\nCONCLUSIONS\nSustained AF was readily inducible in most dogs (82%) after rapid atrial pacing. This model was consistently associated with biatrial myopathy and marked changes in atrial vulnerability. An area in the posterior left atrium was uniformly shown to have the shortest AFCL. 
The results of restoration of sinus rhythm and prevention of inducibility of AF after cryoablation of this area of the left atrium suggest that this area may be critical in the maintenance of AF in this model.",
"title": ""
},
{
"docid": "7323cf16224197b312d1a4c7ff4168ea",
"text": "It is well known that animals can use neural and sensory feedback via vision, tactile sensing, and echolocation to negotiate obstacles. Similarly, most robots use deliberate or reactive planning to avoid obstacles, which relies on prior knowledge or high-fidelity sensing of the environment. However, during dynamic locomotion in complex, novel, 3D terrains, such as a forest floor and building rubble, sensing and planning suffer bandwidth limitation and large noise and are sometimes even impossible. Here, we study rapid locomotion over a large gap-a simple, ubiquitous obstacle-to begin to discover the general principles of the dynamic traversal of large 3D obstacles. We challenged the discoid cockroach and an open-loop six-legged robot to traverse a large gap of varying length. Both the animal and the robot could dynamically traverse a gap as large as one body length by bridging the gap with its head, but traversal probability decreased with gap length. Based on these observations, we developed a template that accurately captured body dynamics and quantitatively predicted traversal performance. Our template revealed that a high approach speed, initial body pitch, and initial body pitch angular velocity facilitated dynamic traversal, and successfully predicted a new strategy for using body pitch control that increased the robot's maximal traversal gap length by 50%. Our study established the first template of dynamic locomotion beyond planar surfaces, and is an important step in expanding terradynamics into complex 3D terrains.",
"title": ""
},
{
"docid": "4affe8335240844414a51355593bfbe0",
"text": "— This paper reviews and extends some recent results on the multivariate fractional Brownian motion (mfBm) and its increment process. A characterization of the mfBm through its covariance function is obtained. Similarly, the correlation and spectral analyses of the increments are investigated. On the other hand we show that (almost) all mfBm’s may be reached as the limit of partial sums of (super)linear processes. Finally, an algorithm to perfectly simulate the mfBm is presented and illustrated by some simulations. Résumé (Propriétés du mouvement brownien fractionnaire multivarié) Cet article constitue une synthèse des propriétés du mouvement brownien fractionnaire multivarié (mBfm) et de ses accroissements. Différentes caractérisations du mBfm sont présentées à partir soit de la fonction de covariance, soit de représentations intégrales. Nous étudions aussi les propriétés temporelles et spectrales du processus des accroissements. D’autre part, nous montrons que (presque) tous les mBfm peuvent être atteints comme la limite (au sens de la convergence faible) des sommes partielles de processus (super)linéaires. Enfin, un algorithme de simulation exacte est présenté et quelques simulations illustrent les propriétés du mBfm.",
"title": ""
},
{
"docid": "c35db6f50a6ca89d45172faf0332946a",
"text": "Mobile commerce had been expected to become a major force of e-commerce in the 21st century. However, the rhetoric has far exceeded the reality so far. While academics and practitioners have presented many views about the lack of rapid growth of mobile commerce, we submit that the anticipated mobile commerce take-off hinges on the emergence of a few killer apps. After reviewing the recent history of technologies that have dramatically changed our way of life and work, we propose a set of criteria for identifying and evaluating killer apps. From this vantage point, we argue that mobile payment and banking are the most likely candidates for the killer apps that could bring the expectation of a world of ubiquitous mobile commerce to fruition. Challenges and opportunities associated with this argument are discussed.",
"title": ""
},
{
"docid": "3c577fcd0d0876af4aa031affa3bd168",
"text": "Domain-specific Internet of Things (IoT) applications are becoming more and more popular. Each of these applications uses their own technologies and terms to describe sensors and their measurements. This is a difficult task to help users build generic IoT applications to combine several domains. To explicitly describe sensor measurements in uniform way, we propose to enrich them with semantic web technologies. Domain knowledge is already defined in more than 200 ontology and sensor-based projects that we could reuse to build cross-domain IoT applications. There is a huge gap to reason on sensor measurements without a common nomenclature and best practices to ease the automation of generic IoT applications. We present our Machine-to-Machine Measurement (M3) framework and share lessons learned to improve existing standards such as oneM2M, ETSI M2M, W3C Web of Things and W3C Semantic Sensor Network.",
"title": ""
}
] |
scidocsrr
|
09ac51c093547175df6b553cc17f7670
|
Drivable Road Detection with 3D Point Clouds Based on the MRF for Intelligent Vehicle
|
[
{
"docid": "3bc9e621a0cfa7b8791ae3fb94eff738",
"text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.",
"title": ""
}
] |
[
{
"docid": "be43b90cce9638b0af1c3143b6d65221",
"text": "Reasoning on provenance information and property propagation is of significant importance in e-science since it helps scientists manage derived metadata in order to understand the source of an object, reproduce results of processes and facilitate quality control of results and processes. In this paper we introduce a simple, yet powerful reasoning mechanism based on property propagation along the transitive part-of and derivation chains, in order to trace the provenance of an object and to carry useful inferences. We apply our reasoning in semantic repositories using the CIDOC-CRM conceptual schema and its extension CRMdig, which has been develop for representing the digital and empirical provenance of digi-",
"title": ""
},
{
"docid": "e85b761664a01273a10819566699bf4f",
"text": "Julius Bernstein belonged to the Berlin school of “organic physicists” who played a prominent role in creating modern physiology and biophysics during the second half of the nineteenth century. He trained under du Bois-Reymond in Berlin, worked with von Helmholtz in Heidelberg, and finally became Professor of Physiology at the University of Halle. Nowadays his name is primarily associated with two discoveries: (1) The first accurate description of the action potential in 1868. He developed a new instrument, a differential rheotome (= current slicer) that allowed him to resolve the exact time course of electrical activity in nerve and muscle and to measure its conduction velocity. (2) His ‘Membrane Theory of Electrical Potentials’ in biological cells and tissues. This theory, published by Bernstein in 1902, provided the first plausible physico-chemical model of bioelectric events; its fundamental concepts remain valid to this day. Bernstein pursued an intense and long-range program of research in which he achieved a new level of precision and refinement by formulating quantitative theories supported by exact measurements. The innovative design and application of his electromechanical instruments were milestones in the development of biomedical engineering techniques. His seminal work prepared the ground for hypotheses and experiments on the conduction of the nervous impulse and ultimately the transmission of information in the nervous system. Shortly after his retirement, Bernstein (1912) summarized his electrophysiological work and extended his theoretical concepts in a book Elektrobiologie that became a classic in its field. The Bernstein Centers for Computational Neuroscience recently established at several universities in Germany were named to honor the person and his work.",
"title": ""
},
{
"docid": "82a3fe6dfa81e425eb3aa3404799e72d",
"text": "ABSTRACT: Nonlinear control problem for a missile autopilot is quick adaptation and minimizing the desired acceleration to missile nonlinear model. For this several missile controllers are provided which are on the basis of nonlinear control or design of linear control for the linear missile system. In this paper a linear control of dynamic matrix type is proposed for the linear model of missile. In the first section, an approximate two degrees of freedom missile model, known as Horton model, is introduced. Then, the nonlinear model is converted into observable and controllable model base on the feedback linear rule of input-state mode type. Finally for design of control model, the dynamic matrix flight control, which is one of the linear predictive control design methods on the basis of system step response information, is used. This controller is a recursive method which calculates the development of system input by definition and optimization of a cost function and using system dynamic matrix. So based on the applied inputs and previous output information, the missile acceleration would be calculated. Unlike other controllers, this controller doesn’t require an interaction effect and accurate model. Although, it has predicting and controlling horizon, there isn’t such horizons in non-predictive methods.",
"title": ""
},
{
"docid": "c966c67c098e8178e6c05b6d446f6dd3",
"text": "Data are today an asset more critical than ever for all organizations we may think of. Recent advances and trends, such as sensor systems, IoT, cloud computing, and data analytics, are making possible to pervasively, efficiently, and effectively collect data. However for data to be used to their full power, data security and privacy are critical. Even though data security and privacy have been widely investigated over the past thirty years, today we face new difficult data security and privacy challenges. Some of those challenges arise from increasing privacy concerns with respect to the use of data and from the need of reconciling privacy with the use of data for security in applications such as homeland protection, counterterrorism, and health, food and water security. Other challenges arise because the deployments of new data collection and processing devices, such as those used in IoT systems, increase the data attack surface. In this paper, we discuss relevant concepts and approaches for data security and privacy, and identify research challenges that must be addressed by comprehensive solutions to data security and privacy.",
"title": ""
},
{
"docid": "c1a76ba2114ec856320651489ee9b28b",
"text": "The boost of available digital media has led to a significant increase in derivative work. With tools for manipulating objects becoming more and more mature, it can be very difficult to determine whether one piece of media was derived from another one or tampered with. As derivations can be done with malicious intent, there is an urgent need for reliable and easily usable tampering detection methods. However, even media considered semantically untampered by humans might have already undergone compression steps or light post-processing, making automated detection of tampering susceptible to false positives. In this paper, we present the PSBattles dataset which is gathered from a large community of image manipulation enthusiasts and provides a basis for media derivation and manipulation detection in the visual domain. The dataset consists of 102’028 images grouped into 11’142 subsets, each containing the original image as well as a varying number of manipulated derivatives.",
"title": ""
},
{
"docid": "e54c308623cb2a2f97e3075e572fdadb",
"text": "Augmented Reality is becoming increasingly popular. The success of a platform is typically observed by measuring the health of the software ecosystem surrounding it. In this paper, we take a closer look at the Vuforia ecosystem’s health by mining the Vuforia platform application repository. It is observed that the developer ecosystem is the strength of the platform. We also determine that Vuforia could be the biggest player in the market if they lay its focus on specific types of app",
"title": ""
},
{
"docid": "049a7164a973fb515ed033ba216ec344",
"text": "Modern vehicle fleets, e.g., for ridesharing platforms and taxi companies, can reduce passengers' waiting times by proactively dispatching vehicles to locations where pickup requests are anticipated in the future. Yet it is unclear how to best do this: optimal dispatching requires optimizing over several sources of uncertainty, including vehicles' travel times to their dispatched locations, as well as coordinating between vehicles so that they do not attempt to pick up the same passenger. While prior works have developed models for this uncertainty and used them to optimize dispatch policies, in this work we introduce a model-free approach. Specifically, we propose MOVI, a Deep Q-network (DQN)-based framework that directly learns the optimal vehicle dispatch policy. Since DQNs scale poorly with a large number of possible dispatches, we streamline our DQN training and suppose that each individual vehicle independently learns its own optimal policy, ensuring scalability at the cost of less coordination between vehicles. We then formulate a centralized receding-horizon control (RHC) policy to compare with our DQN policies. To compare these policies, we design and build MOVI as a large-scale realistic simulator based on 15 million taxi trip records that simulates policy-agnostic responses to dispatch decisions. We show that the DQN dispatch policy reduces the number of unserviced requests by 76% compared to without dispatch and 20% compared to the RHC approach, emphasizing the benefits of a model-free approach and suggesting that there is limited value to coordinating vehicle actions. This finding may help to explain the success of ridesharing platforms, for which drivers make individual decisions.",
"title": ""
},
{
"docid": "18d28769691fb87a6ebad5aae3eae078",
"text": "The current head Injury Assessment Reference Values (IARVs) for the child dummies are based in part on scaling adult and animal data and on reconstructions of real world accident scenarios. Reconstruction of well-documented accident scenarios provides critical data in the evaluation of proposed IARV values, but relatively few accidents are sufficiently documented to allow for accurate reconstructions. This reconstruction of a well documented fatal-fall involving a 23-month old child supplies additional data for IARV assessment. The videotaped fatal-fall resulted in a frontal head impact onto a carpet-covered cement floor. The child suffered an acute right temporal parietal subdural hematoma without skull fracture. The fall dynamics were reconstructed in the laboratory and the head linear and angular accelerations were quantified using the CRABI-18 Anthropomorphic Test Device (ATD). Peak linear acceleration was 125 ± 7 g (range 114-139), HIC15 was 335 ± 115 (Range 257-616), peak angular velocity was 57± 16 (Range 26-74), and peak angular acceleration was 32 ± 12 krad/s 2 (Range 15-56). The results of the CRABI-18 fatal fall reconstruction were consistent with the linear and rotational tolerances reported in the literature. This study investigates the usefulness of the CRABI-18 anthropomorphic testing device in forensic investigations of child head injury and aids in the evaluation of proposed IARVs for head injury. INTRODUCTION Defining the mechanisms of injury and the associated tolerance of the pediatric head to trauma has been the focus of a great deal of research and effort. In contrast to the multiple cadaver experimental studies of adult head trauma published in the literature, there exist only a few experimental studies of infant head injury using human pediatric cadaveric tissue [1-6]. While these few studies have been very informative, due to limitations in sample size, experimental equipment, and study objectives, current estimates of the tolerance of the pediatric head are based on relatively few pediatric cadaver data points combined with the use of scaled adult and animal data. In effort to assess and refine these tolerance estimates, a number of researchers have performed detailed accident reconstructions of well-documented injury scenarios [7-11] . The reliability of the reconstruction data are predicated on the ability to accurately reconstruct the actual accident and quantify the result in a useful injury metric(s). These resulting injury metrics can then be related to the injuries of the child and this, when combined with other reliable reconstructions, can form an important component in evaluating pediatric injury mechanisms and tolerance. Due to limitations in case identification, data collection, and resources, relatively few reconstructions of pediatric accidents have been performed. In this study, we report the results of the reconstruction of an uncharacteristically well documented fall resulting in a fatal head injury of a 23 month old child. The case study was previously reported as case #5 by Plunkett [12]. BACKGROUND As reported by Plunkett (2001), A 23-month-old was playing on a plastic gym set in the garage at her home with her older brother. She had climbed the attached ladder to the top rail above the platform and was straddling the rail, with her feet 0.70 meters (28 inches) above the floor. She lost her balance and fell headfirst onto a 1-cm (3⁄8-inch) thick piece of plush carpet remnant covering the concrete floor. 
She struck the carpet first with her outstretched hands, then with the right front side of her forehead, followed by her right shoulder. Her grandmother had been watching the children play and videotaped the fall. She cried after the fall but was alert",
"title": ""
},
{
"docid": "8d4288ddbdee91e934e6a98734285d1a",
"text": "Find loads of the designing social interfaces principles patterns and practices for improving the user experience book catalogues in this site as the choice of you visiting this page. You can also join to the website book library that will show you numerous books from any types. Literature, science, politics, and many more catalogues are presented to offer you the best book to find. The book that really makes you feels satisfied. Or that's the book that will save you from your job deadline.",
"title": ""
},
{
"docid": "160e06b33d6db64f38480c62989908fb",
"text": "A theoretical and experimental study has been performed on a low-profile, 2.4-GHz dipole antenna that uses a frequency-selective surface (FSS) with varactor-tuned unit cells. The tunable unit cell is a square patch with a small aperture on either side to accommodate the varactor diodes. The varactors are placed only along one dimension to avoid the use of vias and simplify the dc bias network. An analytical circuit model for this type of electrically asymmetric unit cell is shown. The measured data demonstrate tunability from 2.15 to 2.63 GHz with peak gains at broadside that range from 3.7- to 5-dBi and instantaneous bandwidths of 50 to 280 MHz within the tuning range. It is shown that tuning for optimum performance in the presence of a human-core body phantom can be achieved. The total antenna thickness is approximately λ/45.",
"title": ""
},
{
"docid": "572867885a16afc0af6a8ed92632a2a7",
"text": "We present an Efficient Log-based Troubleshooting(ELT) system for cloud computing infrastructures. ELT adopts a novel hybrid log mining approach that combines coarse-grained and fine-grained log features to achieve both high accuracy and low overhead. Moreover, ELT can automatically extract key log messages and perform invariant checking to greatly simplify the troubleshooting task for the system administrator. We have implemented a prototype of the ELT system and conducted an extensive experimental study using real management console logs of a production cloud system and a Hadoop cluster. Our experimental results show that ELT can achieve more efficient and powerful troubleshooting support than existing schemes. More importantly, ELT can find software bugs that cannot be detected by current cloud system management practice.",
"title": ""
},
{
"docid": "0c43c0dbeaff9afa0e73bddb31c7dac0",
"text": "A compact dual-band dielectric resonator antenna (DRA) using a parasitic c-slot fed by a microstrip line is proposed. In this configuration, the DR performs the functions of an effective radiator and the feeding structure of the parasitic c-slot in the ground plane. By optimizing the proposed structure parameters, the structure resonates at two different frequencies. One is from the DRA with the broadside patterns and the other from the c-slot with the dipole-like patterns. In order to determine the performance of varying design parameters on bandwidth and resonance frequency, the parametric study is carried out using simulation software High-Frequency Structure Simulator and experimental results. The measured and simulated results show excellent agreement.",
"title": ""
},
{
"docid": "1465b6c38296dfc46f8725dca5179cf1",
"text": "A brief introduction is given to the actual mechanics of simulated annealing, and a simple example from an IC layout is used to illustrate how these ideas can be applied. The complexities and tradeoffs involved in attacking a realistically complex design problem are illustrated by dissecting two very different annealing algorithms for VLSI chip floorplanning. Several current research problems aimed at determining more precisely how and why annealing algorithms work are examined. Some philosophical issues raised by the introduction of annealing are discussed.<<ETX>>",
"title": ""
},
{
"docid": "e72c88990ad5778eea9ce6dabace4326",
"text": "Studies in humans and rodents have suggested that behavior can at times be \"goal-directed\"-that is, planned, and purposeful-and at times \"habitual\"-that is, inflexible and automatically evoked by stimuli. This distinction is central to conceptions of pathological compulsion, as in drug abuse and obsessive-compulsive disorder. Evidence for the distinction has primarily come from outcome devaluation studies, in which the sensitivity of a previously learned behavior to motivational change is used to assay the dominance of habits versus goal-directed actions. However, little is known about how habits and goal-directed control arise. Specifically, in the present study we sought to reveal the trial-by-trial dynamics of instrumental learning that would promote, and protect against, developing habits. In two complementary experiments with independent samples, participants completed a sequential decision task that dissociated two computational-learning mechanisms, model-based and model-free. We then tested for habits by devaluing one of the rewards that had reinforced behavior. In each case, we found that individual differences in model-based learning predicted the participants' subsequent sensitivity to outcome devaluation, suggesting that an associative mechanism underlies a bias toward habit formation in healthy individuals.",
"title": ""
},
{
"docid": "cc5fae51afaac0119e3cac1cbdae722e",
"text": "The healthcare organization (hospitals, medical centers) should provide quality services at affordable costs. Quality of service implies diagnosing patients accurately and suggesting treatments that are effective. To achieve a correct and cost effective treatment, computer-based information and/or decision support Systems can be developed to full-fill the task. The generated information systems typically consist of large amount of data. Health care organizations must have ability to analyze these data. The Health care system includes data such as resource management, patient centric and transformed data. Data mining techniques are used to explore, analyze and extract these data using complex algorithms in order to discover unknown patterns. Many data mining techniques have been used in the diagnosis of heart disease with good accuracy. Neural Networks have shown great potential to be applied in the development of prediction system for various type of heart disease. This paper investigates the benefits and overhead of various neural network models for heart disease prediction.",
"title": ""
},
{
"docid": "a354f6c1d6411e4dec02031561c93ebd",
"text": "An operating system (OS) kernel is a critical software regarding to reliability and efficiency. Quality of modern OS kernels is already high enough. However, this is not the case for kernel modules, like, for example, device drivers that, due to various reasons, have a significantly lower level of quality. One of the most critical and widespread bugs in kernel modules are violations of rules for correct usage of a kernel API. One can find all such violations in modules or can prove their correctness using static verification tools that need contract specifications describing obligations of a kernel and modules relative to each other. This paper considers present methods and toolsets for static verification of kernel modules for different OSs. A new method for static verification of Linux kernel modules is proposed. This method allows one to configure the verification process at all its stages. It is shown how it can be adapted for checking kernel components of other OSs. An architecture of a configurable toolset for static verification of Linux kernel modules that implements the proposed method is described, and results of its practical application are presented. Directions for further development of the proposed method are discussed in conclusion.",
"title": ""
},
{
"docid": "8c29241ff4fd2f7c01043307a10c1726",
"text": "We are experiencing an abundance of Internet-of-Things (IoT) middleware solutions that provide connectivity for sensors and actuators to the Internet. To gain a widespread adoption, these middleware solutions, referred to as platforms, have to meet the expectations of different players in the IoT ecosystem, including device providers, application developers, and end-users, among others. In this article, we evaluate a representative sample of these platforms, both proprietary and open-source, on the basis of their ability to meet the expectations of different IoT users. The evaluation is thus more focused on how ready and usable these platforms are for IoT ecosystem players, rather than on the peculiarities of the underlying technological layers. The evaluation is carried out as a gap analysis of the current IoT landscape with respect to (i) the support for heterogeneous sensing and actuating technologies, (ii) the data ownership and its implications for security and privacy, (iii) data processing and data sharing capabilities, (iv) the support offered to application developers, (v) the completeness of an IoT ecosystem, and (vi) the availability of dedicated IoT marketplaces. The gap analysis aims to highlight the deficiencies of today’s solutions to improve their integration to tomorrow’s ecosystems. In order to strengthen the finding of our analysis, we conducted a survey among the partners of the Finnish IoT program, counting over 350 experts, to evaluate the most critical issues for the development of future IoT platforms. Based on the results of our analysis and our survey, we conclude this article with a list of recommendations for extending these IoT platforms in order to fill in the gaps.",
"title": ""
},
{
"docid": "9e4b7e87229dfb02c2600350899049be",
"text": "This paper presents an efficient and reliable swarm intelligence-based approach, namely elitist-mutated particle swarm optimization EMPSO technique, to derive reservoir operation policies for multipurpose reservoir systems. Particle swarm optimizers are inherently distributed algorithms, in which the solution for a problem emerges from the interactions between many simple individuals called particles. In this study the standard particle swarm optimization PSO algorithm is further improved by incorporating a new strategic mechanism called elitist-mutation to improve its performance. The proposed approach is first tested on a hypothetical multireservoir system, used by earlier researchers. EMPSO showed promising results, when compared with other techniques. To show practical utility, EMPSO is then applied to a realistic case study, the Bhadra reservoir system in India, which serves multiple purposes, namely irrigation and hydropower generation. To handle multiple objectives of the problem, a weighted approach is adopted. The results obtained demonstrate that EMPSO is consistently performing better than the standard PSO and genetic algorithm techniques. It is seen that EMPSO is yielding better quality solutions with less number of function evaluations. DOI: 10.1061/ ASCE 0733-9496 2007 133:3 192 CE Database subject headings: Reservoir operation; Optimization; Irrigation; Hydroelectric power generation.",
"title": ""
},
{
"docid": "11355807aa6b24f2eade366f391f0338",
"text": "Object detectors have hugely profited from moving towards an end-to-end learning paradigm: proposals, fea tures, and the classifier becoming one neural network improved results two-fold on general object detection. One indispensable component is non-maximum suppression (NMS), a post-processing algorithm responsible for merging all detections that belong to the same object. The de facto standard NMS algorithm is still fully hand-crafted, suspiciously simple, and — being based on greedy clustering with a fixed distance threshold — forces a trade-off between recall and precision. We propose a new network architecture designed to perform NMS, using only boxes and their score. We report experiments for person detection on PETS and for general object categories on the COCO dataset. Our approach shows promise providing improved localization and occlusion handling.",
"title": ""
},
{
"docid": "d8fc658756c4dd826b90a7e126e2e44d",
"text": "Knowledge graph embedding refers to projecting entities and relations in knowledge graph into continuous vector spaces. State-of-the-art methods, such as TransE, TransH, and TransR build embeddings by treating relation as translation from head entity to tail entity. However, previous models can not deal with reflexive/one-to-many/manyto-one/many-to-many relations properly, or lack of scalability and efficiency. Thus, we propose a novel method, flexible translation, named TransF, to address the above issues. TransF regards relation as translation between head entity vector and tail entity vector with flexible magnitude. To evaluate the proposed model, we conduct link prediction and triple classification on benchmark datasets. Experimental results show that our method remarkably improve the performance compared with several state-of-the-art baselines.",
"title": ""
}
] |
scidocsrr
|
31e27d53a3fe6dfbe288783e4d26c06c
|
Enterprise Cloud Service Architecture
|
[
{
"docid": "84cb130679353dbdeff24100409f57fe",
"text": "Cloud computing has become another buzzword after Web 2.0. However, there are dozens of different definitions for cloud computing and there seems to be no consensus on what a cloud is. On the other hand, cloud computing is not a completely new concept; it has intricate connection to the relatively new but thirteen-year established grid computing paradigm, and other relevant technologies such as utility computing, cluster computing, and distributed systems in general. This paper strives to compare and contrast cloud computing with grid computing from various angles and give insights into the essential characteristics of both.",
"title": ""
}
] |
[
{
"docid": "6d6e3b9ae698aca9981dc3b6dfb11985",
"text": "Several recent papers have tried to address the genetic determination of eye colour via microsatellite linkage, testing of pigmentation candidate gene polymorphisms and the genome wide analysis of SNP markers that are informative for ancestry. These studies show that the OCA2 gene on chromosome 15 is the major determinant of brown and/or blue eye colour but also indicate that other loci will be involved in the broad range of hues seen in this trait in Europeans.",
"title": ""
},
{
"docid": "4c5dd43f350955b283f1a04ddab52d41",
"text": "This thesis deals with interaction design for a class of upcoming computer technologies for human use characterized by being different from traditional desktop computers in their physical appearance and the contexts in which they are used. These are typically referred to as emerging technologies. Emerging technologies often imply interaction dissimilar from how computers are usually operated. This challenges the scope and applicability of existing knowledge about human-computer interaction design. The thesis focuses on three specific technologies: virtual reality, augmented reality and mobile computer systems. For these technologies, five themes are addressed: current focus of research, concepts, interaction styles, methods and tools. These themes inform three research questions, which guide the conducted research. The thesis consists of five published research papers and a summary. In the summary, current focus of research is addressed from the perspective of research methods and research purpose. Furthermore, the notions of human-computer interaction design and emerging technologies are discussed and two central distinctions are introduced. Firstly, interaction design is divided into two categories with focus on systems and processes respectively. Secondly, the three studied emerging technologies are viewed in relation to immersion into virtual space and mobility in physical space. These distinctions are used to relate the five paper contributions, each addressing one of the three studied technologies with focus on properties of systems or the process of creating them respectively. Three empirical sources contribute to the results. Experiments with interaction design inform the development of concepts and interaction styles suitable for virtual reality, augmented reality and mobile computer systems. Experiments with designing interaction inform understanding of how methods and tools support design processes for these technologies. Finally, a literature survey informs a review of existing research, and identifies current focus, limitations and opportunities for future research. The primary results of the thesis are: 1) Current research within human-computer interaction design for the studied emerging technologies focuses on building systems ad-hoc and evaluating them in artificial settings. This limits the generation of cumulative theoretical knowledge. 2) Interaction design for the emerging technologies studied requires the development of new suitable concepts and interaction styles. Suitable concepts describe unique properties and challenges of a technology. Suitable interaction styles respond to these challenges by exploiting the technology’s unique properties. 3) Designing interaction for the studied emerging technologies involves new use situations, a distance between development and target platforms and complex programming. Elements of methods exist, which are useful for supporting the design of interaction, but they are fragmented and do not support the process as a whole. The studied tools do not support the design process as a whole either but support aspects of interaction design by bridging the gulf between development and target platforms and providing advanced programming environments. Menneske-maskine interaktionsdesign for opkommende teknologier Virtual Reality, Augmented Reality og Mobile Computersystemer",
"title": ""
},
{
"docid": "28fd803428e8f40a4627e05a9464e97b",
"text": "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.",
"title": ""
},
{
"docid": "ec6b1d26b06adc99092659b4a511da44",
"text": "Social identity threat is the notion that one of a person's many social identities may be at risk of being devalued in a particular context (C. M. Steele, S. J. Spencer, & J. Aronson, 2002). The authors suggest that in domains in which women are already negatively stereotyped, interacting with a sexist man can trigger social identity threat, undermining women's performance. In Study 1, male engineering students who scored highly on a subtle measure of sexism behaved in a dominant and sexually interested way toward an ostensible female classmate. In Studies 2 and 3, female engineering students who interacted with such sexist men, or with confederates trained to behave in the same way, performed worse on an engineering test than did women who interacted with nonsexist men. Study 4 replicated this finding and showed that women's underperformance did not extend to an English test, an area in which women are not negatively stereotyped. Study 5 showed that interacting with sexist men leads women to suppress concerns about gender stereotypes, an established mechanism of stereotype threat. Discussion addresses implications for social identity threat and for women's performance in school and at work.",
"title": ""
},
{
"docid": "4e530e55fffbf5e0bc465a7cf378d148",
"text": "We describe a project to link the Princeton WordNet to 3D representations of real objects and scenes. The goal is to establish a dataset that helps us to understand how people categorize everyday common objects via their parts, attributes, and context. This paper describes the annotation and data collection effort so far as well as ideas for future work.",
"title": ""
},
{
"docid": "7d84e574d2a6349a9fc2669fdbe08bba",
"text": "Domain-specific languages (DSLs) provide high-level and domain-specific abstractions that allow expressive and concise algorithm descriptions. Since the description in a DSL hides also the properties of the target hardware, DSLs are a promising path to target different parallel and heterogeneous hardware from the same algorithm description. In theory, the DSL description can capture all characteristics of the algorithm that are required to generate highly efficient parallel implementations. However, most frameworks do not make use of this knowledge and the performance cannot reach that of optimized library implementations. In this article, we present the HIPAcc framework, a DSL and source-to-source compiler for image processing. We show that domain knowledge can be captured in the language and that this knowledge enables us to generate tailored implementations for a given target architecture. Back ends for CUDA, OpenCL, and Renderscript allow us to target discrete graphics processing units (GPUs) as well as mobile, embedded GPUs. Exploiting the captured domain knowledge, we can generate specialized algorithm variants that reach the maximal achievable performance due to the peak memory bandwidth. These implementations outperform state-of-the-art domain-specific languages and libraries significantly.",
"title": ""
},
{
"docid": "b12619b74b84dcc48af3e07313771c8b",
"text": "Domain adaptation is important in sentiment analysis as sentiment-indicating words vary between domains. Recently, multi-domain adaptation has become more pervasive, but existing approaches train on all available source domains including dissimilar ones. However, the selection of appropriate training data is as important as the choice of algorithm. We undertake – to our knowledge for the first time – an extensive study of domain similarity metrics in the context of sentiment analysis and propose novel representations, metrics, and a new scope for data selection. We evaluate the proposed methods on two largescale multi-domain adaptation settings on tweets and reviews and demonstrate that they consistently outperform strong random and balanced baselines, while our proposed selection strategy outperforms instance-level selection and yields the best score on a large reviews corpus. All experiments are available at url_redacted1",
"title": ""
},
{
"docid": "08dbd88adb399721e0f5ee91534c9888",
"text": "Many theories of attention have proposed that visual working memory plays an important role in visual search tasks. The present study examined the involvement of visual working memory in search using a dual-task paradigm in which participants performed a visual search task while maintaining no, two, or four objects in visual working memory. The presence of a working memory load added a constant delay to the visual search reaction times, irrespective of the number of items in the visual search array. That is, there was no change in the slope of the function relating reaction time to the number of items in the search array, indicating that the search process itself was not slowed by the memory load. Moreover, the search task did not substantially impair the maintenance of information in visual working memory. These results suggest that visual search requires minimal visual working memory resources, a conclusion that is inconsistent with theories that propose a close link between attention and working memory.",
"title": ""
},
{
"docid": "3e845c9a82ef88c7a1f4447d57e35a3e",
"text": "Link prediction is a key problem for network-structured data. Link prediction heuristics use some score functions, such as common neighbors and Katz index, to measure the likelihood of links. They have obtained wide practical uses due to their simplicity, interpretability, and for some of them, scalability. However, every heuristic has a strong assumption on when two nodes are likely to link, which limits their effectiveness on networks where these assumptions fail. In this regard, a more reasonable way should be learning a suitable heuristic from a given network instead of using predefined ones. By extracting a local subgraph around each target link, we aim to learn a function mapping the subgraph patterns to link existence, thus automatically learning a “heuristic” that suits the current network. In this paper, we study this heuristic learning paradigm for link prediction. First, we develop a novel γ-decaying heuristic theory. The theory unifies a wide range of heuristics in a single framework, and proves that all these heuristics can be well approximated from local subgraphs. Our results show that local subgraphs reserve rich information related to link existence. Second, based on the γ-decaying theory, we propose a new method to learn heuristics from local subgraphs using a graph neural network (GNN). Its experimental results show unprecedented performance, working consistently well on a wide range of problems.",
"title": ""
},
{
"docid": "a425425658207587c079730a68599572",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstoLorg/aboutiterms.html. JSTOR's Terms and Conditions ofDse provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission. Operations Research is published by INFORMS. Please contact the publisher for further permissions regarding the use of this work. Publisher contact information may be obtained at http://www.jstor.org/jowllalslinforms.html.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "50d0ee6100a5678620d12217a2a72184",
"text": "1. Identify 5 feedback systems that you encounter in your everyday environment. For each system, identify the sensing mechanism, actuation mechanism, and control law. Describe the uncertainty that the feedback system provides robustness with respect to and/or the dynamics that are changed through the use of feedback. At least one example should correspond to a system that comes from your own discipline or research activities.",
"title": ""
},
{
"docid": "1f1958a2b1a83fecc4a3cc9223d151e5",
"text": "We present acoustic barcodes, structured patterns of physical notches that, when swiped with e.g., a fingernail, produce a complex sound that can be resolved to a binary ID. A single, inexpensive contact microphone attached to a surface or object is used to capture the waveform. We present our method for decoding sounds into IDs, which handles variations in swipe velocity and other factors. Acoustic barcodes could be used for information retrieval or to triggering interactive functions. They are passive, durable and inexpensive to produce. Further, they can be applied to a wide range of materials and objects, including plastic, wood, glass and stone. We conclude with several example applications that highlight the utility of our approach, and a user study that explores its feasibility.",
"title": ""
},
{
"docid": "4125dba64f9d693a8b89854ee712eca5",
"text": "Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences. While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled. We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame. By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. To train our network, we use 1,132 240-fps video clips, containing 300K individual video frames. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.",
"title": ""
},
{
"docid": "4434ad83cad1b8dc353f24fdf12a606c",
"text": "Open source tools have recently reached a level of maturity which makes them suitable for building large-scale real-world systems. At the same time, the field of machine learning has developed a large body of powerful learning algorithms for diverse applications. However, the true potential of these methods is not used, since existing implementations are not openly shared, resulting in software with low usability, and weak interoperability. We argue that this situation can be significantly improved by increasing incentives for researchers to publish their software under an open source model. Additionally, we outline the problems authors are faced with when trying to publish algorithmic implementations of machine learning methods. We believe that a resource of peer reviewed software accompanied by short articles would be highly valuable to both the machine learning and the general scientific community.",
"title": ""
},
{
"docid": "cf18799eeaf3c5f2b344c1bbbc15da7f",
"text": "This paper presents a machine-learning classifier where the computation is performed within a standard 6T SRAM array. This eliminates explicit memory operations, which otherwise pose energy/performance bottlenecks, especially for emerging algorithms (e.g., from machine learning) that result in high ratio of memory accesses. We present an algorithm and prototype IC (in 130nm CMOS), where a 128×128 SRAM array performs storage of classifier models and complete classifier computations. We demonstrate a real application, namely digit recognition from MNIST-database images. The accuracy is equal to a conventional (ideal) digital/SRAM system, yet with 113× lower energy. The approach achieves accuracy >95% with a full feature set (i.e., 28×28=784 image pixels), and 90% when reduced to 82 features (as demonstrated on the IC due to area limitations). The energy per 10-way digit classification is 633pJ at a speed of 50MHz.",
"title": ""
},
{
"docid": "f599668745fd60d907deca91026d48da",
"text": "While Bregman divergences have been used for clustering and embedding problems in recent years, the facts that they are asymmetric and do not satisfy triangle inequality have been a major concern. In this paper, we investigate the relationship between two families of symmetrized Bregman divergences and metrics, which satisfy the triangle inequality. The first family can be derived from any well-behaved convex function under clearly quantified conditions. The second family generalizes the Jensen-Shannon divergence, and can only be derived from convex functions with certain conditional positive definiteness structure. We interpret the required structure in terms of cumulants of infinitely divisible distributions, and related results in harmonic analysis. We investigate kmeans-type clustering problems using both families of symmetrized divergences, and give efficient algorithms for the same",
"title": ""
},
{
"docid": "c1c9730b191f2ac9186ac704fd5b929f",
"text": "This paper reports on the results of a survey of user interface programming. The survey was widely distributed, and we received 74 responses. The results show that in today's applications, an average of 48% of the code is devoted to the user interface portion. The average time spent on the user interface portion is 45% during the design phase, 50% during the implementation phase, and 37% during the maintenance phase. 34% of the systems were implemented using a toolkit, 27% used a UIMS, 14% used an interface builder, and 26% used no tools. This appears to be because the toolkit systems had more sophisticated user interfaces. The projects using UIMSs or interface builders spent the least percent of time and code on the user interface (around 41%) suggesting that these tools are effective. In general, people were happy with the tools they used, especially the graphical interface builders. The most common problems people reported when developing a user interface included getting users' requirements, writing help text, achieving consistency, learning how to use the tools, getting acceptable performance, and communicating among various parts of the program.",
"title": ""
},
{
"docid": "691a24c16b926378d5c586c7f2b1ce22",
"text": "Isolated 7p22.3p22.2 deletions are rarely described with only two reports in the literature. Most other reported cases either involve a much larger region of the 7p arm or have an additional copy number variation. Here, we report five patients with overlapping microdeletions at 7p22.3p22.2. The patients presented with variable developmental delays, exhibiting relative weaknesses in expressive language skills and relative strengths in gross, and fine motor skills. The most consistent facial features seen in these patients included a broad nasal root, a prominent forehead a prominent glabella and arched eyebrows. Additional variable features amongst the patients included microcephaly, metopic ridging or craniosynostosis, cleft palate, cardiac defects, and mild hypotonia. Although the patients' deletions varied in size, there was a 0.47 Mb region of overlap which contained 7 OMIM genes: EIP3B, CHST12, LFNG, BRAT1, TTYH3, AMZ1, and GNA12. We propose that monosomy of this region represents a novel microdeletion syndrome. We recommend that individuals with 7p22.3p22.2 deletions should receive a developmental assessment and a thorough cardiac exam, with consideration of an echocardiogram, as part of their initial evaluation.",
"title": ""
},
{
"docid": "b1dd830adf87c283ff58630eade75b3c",
"text": "Self-control is a central function of the self and an important key to success in life. The exertion of self-control appears to depend on a limited resource. Just as a muscle gets tired from exertion, acts of self-control cause short-term impairments (ego depletion) in subsequent self-control, even on unrelated tasks. Research has supported the strength model in the domains of eating, drinking, spending, sexuality, intelligent thought, making choices, and interpersonal behavior. Motivational or framing factors can temporarily block the deleterious effects of being in a state of ego depletion. Blood glucose is an important component of the energy. KEYWORDS—self-control; ego depletion; willpower; impulse; strength Every day, people resist impulses to go back to sleep, to eat fattening or forbidden foods, to say or do hurtful things to their relationship partners, to play instead of work, to engage in inappropriate sexual or violent acts, and to do countless other sorts of problematic behaviors—that is, ones that might feel good immediately or be easy but that carry long-term costs or violate the rules and guidelines of proper behavior. What enables the human animal to follow rules and norms prescribed by society and to resist doing what it selfishly wants? Self-control refers to the capacity for altering one’s own responses, especially to bring them into line with standards such as ideals, values, morals, and social expectations, and to support the pursuit of long-term goals. Many writers use the terms selfcontrol and self-regulation interchangeably, but those whomake a distinction typically consider self-control to be the deliberate, conscious, effortful subset of self-regulation. In contrast, homeostatic processes such as maintaining a constant body temperature may be called self-regulation but not self-control. Self-control enables a person to restrain or override one response, thereby making a different response possible. Self-control has attracted increasing attention from psychologists for two main reasons. At the theoretical level, self-control holds important keys to understanding the nature and functions of the self. Meanwhile, the practical applications of self-control have attracted study in many contexts. Inadequate self-control has been linked to behavioral and impulse-control problems, including overeating, alcohol and drug abuse, crime and violence, overspending, sexually impulsive behavior, unwanted pregnancy, and smoking (e.g., Baumeister, Heatherton, & Tice, 1994; Gottfredson & Hirschi, 1990; Tangney, Baumeister, & Boone, 2004; Vohs & Faber, 2007). It may also be linked to emotional problems, school underachievement, lack of persistence, various failures at task performance, relationship problems and dissolution, and more.",
"title": ""
}
] |
scidocsrr
|
f9aec2b293d3af1446f75f0b6b5dd0f7
|
Truthful Online Auction for Cloud Instance Subletting
|
[
{
"docid": "26cfea93f837197e3244f771526d2fe7",
"text": "The payof matrix of the numberdistance game is as folow. We know that each player is invariant to the diferent actions in her support. First we guessed that al of the actions are in supports for both players. Let x,y,z be the probability that the first players plays 1,0,2 respectively and Let p,q,r be the probability that the second players plays 1,0,2 respectively. For the first player we have: 0*p+1*q+3*r = 1*p+0*q+2*r = 3*p+2*q+0*r p+q+r=1",
"title": ""
},
{
"docid": "1700ee1ba5fef2c9efa9a2b8bfa7d6bd",
"text": "This work studies resource allocation in a cloud market through the auction of Virtual Machine (VM) instances. It generalizes the existing literature by introducing combinatorial auctions of heterogeneous VMs, and models dynamic VM provisioning. Social welfare maximization under dynamic resource provisioning is proven NP-hard, and modeled with a linear integer program. An efficient α-approximation algorithm is designed, with α ~ 2.72 in typical scenarios. We then employ this algorithm as a building block for designing a randomized combinatorial auction that is computationally efficient, truthful in expectation, and guarantees the same social welfare approximation factor α. A key technique in the design is to utilize a pair of tailored primal and dual LPs for exploiting the underlying packing structure of the social welfare maximization problem, to decompose its fractional solution into a convex combination of integral solutions. Empirical studies driven by Google Cluster traces verify the efficacy of the randomized auction.",
"title": ""
}
] |
[
{
"docid": "e93f4f5c5828a7e82819964bbd29f8d4",
"text": "BACKGROUND\nAlthough hyaluronic acid (HA) specifications such as molecular weight and particle size are fairly well characterized, little information about HA ultrastructural and morphologic characteristics has been reported in clinical literature.\n\n\nOBJECTIVE\nTo examine uniformity of HA structure, the effects of extrusion, and lidocaine dilution of 3 commercially available HA soft-tissue fillers.\n\n\nMATERIALS AND METHODS\nUsing scanning electron microscopy and energy-dispersive x-ray analysis, investigators examined the soft-tissue fillers at various magnifications for ultrastructural detail and elemental distributions.\n\n\nRESULTS\nAll HAs contained oxygen, carbon, and sodium, but with uneven distributions. Irregular particulate matter was present in RES but BEL and JUV were largely particle free. Spacing was more uniform in BEL than JUV and JUV was more uniform than RES. Lidocaine had no apparent effect on morphology; extrusion through a 30-G needle had no effect on ultrastructure.\n\n\nCONCLUSION\nDescriptions of the ultrastructural compositions and nature of BEL, JUV, and RES are helpful for matching the areas to be treated with the HA soft-tissue filler architecture. Lidocaine and extrusion through a 30-G needle exerted no influence on HA structure. Belotero Balance shows consistency throughout the syringe and across manufactured lots.",
"title": ""
},
{
"docid": "c31dbdee3c36690794f3537c61cfc1e3",
"text": "Shape memory alloy (SMA) actuators, which have ability to return to a predetermined shape when heated, have many potential applications in aeronautics, surgical tools, robotics and so on. Although the number of applications is increasing, there has been limited success in precise motion control since the systems are disturbed by unknown factors beside their inherent nonlinear hysteresis or the surrounding environment of the systems is changed. This paper presents a new development of SMA position control system by using self-tuning fuzzy PID controller. The use of this control algorithm is to tune the parameters of the PID controller by integrating fuzzy inference and producing a fuzzy adaptive PID controller that can be used to improve the control performance of nonlinear systems. The experimental results of position control of SMA actuators using conventional and self tuning fuzzy PID controller are both included in this paper",
"title": ""
},
{
"docid": "ac9b06cbff27f1d370df33e1398c9942",
"text": "The purpose of this experiment was to examine the effect of web page text/background colour combination on readability, retention, aesthetics, and behavioural intention. One hundred and thirty-six participants studied two Web pages, one with educational content and one with commercial content, in one of four colour-combination conditions. Major findings were: (a) Colours with greater contrast ratio generally lead to greater readability; (b) colour combination did not significantly affect retention; (c) preferred colours (i.e., blues and chromatic colours) led to higher ratings of aesthetic quality and intention to purchase; and (d) ratings of aesthetic quality were significantly related to intention to purchase.",
"title": ""
},
{
"docid": "ffb7b58d947aa15cd64efbadb0f9543d",
"text": "A multi-armed bandit is an experiment with the goal of accumulating rewards from a payoff distribution with unknown parameters that are to be learned sequentially. This article describes a heuristic for managing multi-armed bandits called randomized probability matching, which randomly allocates observations to arms according the Bayesian posterior probability that each arm is optimal. Advances in Bayesian computation have made randomized probability matching easy to apply to virtually any payoff distribution. This flexibility frees the experimenter to work with payoff distributions that correspond to certain classical experimental designs that have the potential to outperform methods that are ‘optimal’ in simpler contexts. I summarize the relationships between randomized probability matching and several related heuristics that have been used in the reinforcement learning literature. Copyright q 2010 John Wiley & Sons, Ltd.",
"title": ""
},
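The randomized probability matching heuristic described in the abstract above can be illustrated with a short sketch for Bernoulli-reward arms under Beta(1, 1) priors; the payoff probabilities and play count below are made-up values for illustration only.

import random

true_probs = [0.3, 0.5, 0.6]          # hypothetical unknown arm payoff probabilities
successes = [0] * len(true_probs)
failures = [0] * len(true_probs)

for _ in range(1000):
    # Draw once from each arm's Beta posterior and play the arm with the largest draw;
    # this allocates plays according to the posterior probability that each arm is optimal.
    draws = [random.betavariate(1 + s, 1 + f) for s, f in zip(successes, failures)]
    arm = draws.index(max(draws))
    if random.random() < true_probs[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print(successes, failures)  # most plays should concentrate on the best arm over time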
{
"docid": "5404c00708c64d9f254c25f0065bc13c",
"text": "In this paper, we discuss the problem of automatic skin lesion analysis, specifically melanoma detection and semantic segmentation. We accomplish this by using deep learning techniques to perform classification on publicly available dermoscopic images. Skin cancer, of which melanoma is a type, is the most prevalent form of cancer in the US and more than four million cases are diagnosed in the US every year. In this work, we present our efforts towards an accessible, deep learning-based system that can be used for skin lesion classification, thus leading to an improved melanoma screening system. For classification, a deep convolutional neural network architecture is first implemented over the raw images. In addition, hand-coded features such as 166-D color histogram distribution, edge histogram and Multiscale Color local binary patterns are extracted from the images and presented to a random forest classifier. The average of the outputs from the two mentioned classifiers is taken as the final classification result. The classification task achieves an accuracy of 80.3%, AUC score of 0.69 and a precision score of 0.81. For segmentation, we implement a convolutional-deconvolutional architecture and the segmentation model achieves a Dice coefficient of 73.5%.",
"title": ""
},
{
"docid": "82a9ee145c1dbec711baf8c0b3c11715",
"text": "In order to improve the performance of internet public sentiment analysis, a text sentiment analysis method combining Latent Dirichlet Allocation (LDA) text representation and convolutional neural network (CNN) is proposed. First, the review texts are collected from the network for preprocessing. Then, using the LDA topic model to train the latent semantic space representation (topic distribution) of the short text, and the short text feature vector representation based on the topic distribution is constructed. Finally, the CNN with gated recurrent unit (GRU) is used as a classifier. According to the input feature matrix, the GRU-CNN strengthens the relationship between words and words, text and text, so as to achieve high accurate text classification. The simulation results show that this method can effectively improve the accuracy of text sentiment classification.",
"title": ""
},
{
"docid": "4ebe344a72053aef8ed19e3da139bb10",
"text": "Construction industry faces a lot of inherent uncertainties and issues. As this industry is plagued by risk, risk management is an important part of the decision-making process of these companies. Risk assessment is the critical procedure of risk management. Despite many scholars and practitioners recognizing the risk assessment models in projects, insufficient attention has been paid by researchers to select the suitable risk assessment model. In general, many factors affect this problem which adheres to uncertain and imprecise data and usually several people are involved in the selection process. Using the fuzzy TOPSIS method, this study provides a rational and systematic process for developing the best model under each of the selection criteria. Decision criteria are obtained from the nominal group technique (NGT). The proposed method can discriminate successfully and clearly among risk assessment methods. The proposed approach is demonstrated using a real case involving an Iranian construction corporation. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "894e945c9bb27f5464d1b8f119139afc",
"text": "Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), which is a hybrid network consisting of an autoencoder that learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p = .0012) for our model C=0.75 (95% CI: 0.70 - 0.79) than the human benchmark of C=0.59 (95% CI: 0.53 - 0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.",
"title": ""
},
{
"docid": "b50918f904d08f678cb153b16b052344",
"text": "According to Earnshaw's theorem, the ratio between axial and radial stiffness is always -2 for pure permanent magnetic configurations with rotational symmetry. Using highly permeable material increases the force and stiffness of permanent magnetic bearings. However, the stiffness in the unstable direction increases more than the stiffness in the stable direction. This paper presents an analytical approach to calculating the axial force and the axial and radial stiffnesses of attractive passive magnetic bearings (PMBs) with back iron. The investigations are based on the method of image charges and show in which magnet geometries lead to reasonable axial to radial stiffness ratios. Furthermore, the magnet dimensions achieving maximum force and stiffness per magnet volume are outlined. Finally, the calculation method was applied to the PMB of a magnetically levitated fan, and the analytical results were compared with a finite element analysis.",
"title": ""
},
{
"docid": "40bb1cb654b67ecd9f29cb47328ff2cd",
"text": "Detecting fraudulent financial statements (FFS) is critical in order to protect the global financial market. In recent years, FFS have begun to appear and continue to grow rapidly, which has shocked the confidence of investors and threatened the economics of entire countries. While auditors are the last line of defense to detect FFS, many auditors lack the experience and expertise to deal with the related risks. This study introduces a support vector machine-based fraud warning (SVMFW) model to reduce these risks. The model integrates sequential forward selection (SFS), support vector machine (SVM), and a classification and regression tree (CART). SFS is employed to overcome information overload problems, and the SVM technique is then used to assess the likelihood of FFS. To select the parameters of SVM models, particle swarm optimization (PSO) is applied. Finally, CART is employed to enable auditors to increase substantive testing during their audit procedures by adopting reliable, easy-to-grasp decision rules. The experiment results show that the SVMFW model can reduce unnecessary information, satisfactorily detect FFS, and provide directions for properly allocating audit resources in limited audits. The model is a promising alternative for detecting FFS caused by top management, and it can assist in both taxation and the banking system. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4143ba04659bf7b46c1733ae42b08956",
"text": "Recent face recognition experiments on the LFW [13] benchmark show that face recognition is performing stunningly well, surpassing human recognition rates. In this paper, we study face recognition at scale. Specifically, we have collected from Flickr a Million faces and evaluated state of the art face recognition algorithms on this dataset. We found that the performance of algorithms varies–while all perform great on LFW, once evaluated at scale recognition rates drop drastically for most algorithms. Interestingly, deep learning based approach by [23] performs much better, but still gets less robust at scale. We consider both verification and identification problems, and evaluate how pose affects recognition at scale. Moreover, we ran an extensive human study on Mechanical Turk to evaluate human recognition at scale, and report results. All the photos are creative commons photos and are released for research and further experiments on http://megaface. cs.washington.edu.",
"title": ""
},
{
"docid": "5ff1dee6b9f04f107b0d02ed07126a20",
"text": "This work explores the problem of autonomously driving a vehicle given live videos obtained from cameras mounted around it. The current approach to this problem is to utilize hand-coded rules that explicitly tell a vehicle how to react given various stimuli (ex. stop signs, lane markings, etc). We instead seek to solve it using a single, end-to-end deep convolutional neural net trained on pairs of images and vehicle control parameters (ex. steering angle, throttle, etc) recorded from an actual driving sequence by a human. When visualizing the class activation maps for end-to-end models that currently exist for this task, it can be seen that the convolutional layers do not activate upon many regions of the image other than just the road. This is a major issue because driving requires understanding of the entire image to properly react to objects like pedestrians, traffic lights, and even other vehicles. We propose a new model that takes advantage of object detectors and multi-task learning to create a network that helps pays more attention to significant areas of an image in an unsupervised manner.",
"title": ""
},
{
"docid": "caa7385e4707f9dc5aa111004698e547",
"text": "Virtual Reality allows rapid prototyping and simulation of physical artefacts, which would be difficult and expensive to perform otherwise. On the other hand, when the design process is complex and involves multiple stakeholders, decisions are taken in meetings hosted in the physical world. In the case of aerospace industrial designs, the process is accelerated by having asymmetric collaboration between the two locations: experts discuss the possibilities in a meeting room while a technician immersed in VR tests the selected alternatives. According to experts, the current approach is not without limitations, and in this work, we present prototypes designed to tackle them. The described artefacts were created to address the main issues: awareness of the remote location, remote interaction and manipulation, and navigation between locations. First feedback from experts regarding the prototypes is also presented. The resulting design considerations can be used in other asymmetric collaborative scenarios.",
"title": ""
},
{
"docid": "35adbc66c3b98543471bbe47cb71e00d",
"text": "Because of their demonstrated capabilities in attaining high rates of advance in civil tunnel construction, the hard rock mining industry has always shown a major interest in the use of TBMs for mine development, primarily for development of entries, as well as ventilation, haulage and production drifts. The successful application of TBM technology to mining depends on the selection of the most suitable equipment and cutting tools for the rock and ground conditions to be encountered. In addition to geotechnical investigations and required rock testing, cutterhead design optimization is an integral part of the machine selection to ensure a successful application of the machines in a specific underground mine environment. This paper presents and discusses selected case histories of TBM applications in mining, the lessons learned, the process of laboratory testing together with machine selection and performance estimation methods.",
"title": ""
},
{
"docid": "f69bb350cdd5b39975f44876bf326c0b",
"text": "Generative Adversarial Networks have emerged as an effective technique for estimating data distributions. The basic setup consists of two deep networks playing against each other in a zero-sum game setting. However, it is not understood if the networks reach an equilibrium eventually and what dynamics makes this possible. The current GAN training procedure, which involves simultaneous gradient descent, lacks a clear game-theoretic justification in the literature. In this paper, we introduce regret minimization as a technique to reach equilibrium in games and use this to motivate the use of simultaneous GD in GANs. In addition, we present a hypothesis that mode collapse, which is a common occurrence in GAN training, happens due to the existence of spurious local equilibria in non-convex games. Motivated by these insights, we develop an algorithm called DRAGAN that is fast, simple to implement and achieves competitive performance in a stable fashion across different architectures, datasets (MNIST, CIFAR-10, and CelebA), and divergence measures with almost no hyperparameter tuning.",
"title": ""
},
{
"docid": "138fc7af52066e890b45afd96debbe91",
"text": "We present a general scheme for analyzing the performance of a generic localization algorithm for multilateration (MLAT) systems (or for other distributed sensor, passive localization technology). MLAT systems are used for airport surface surveillance and are based on time difference of arrival measurements of Mode S signals (replies and 1,090 MHz extended squitter, or 1090ES). In the paper, we propose to consider a localization algorithm as composed of two components: a data model and a numerical method, both being properly defined and described. In this way, the performance of the localization algorithm can be related to the proper combination of statistical and numerical performances. We present and review a set of data models and numerical methods that can describe most localization algorithms. We also select a set of existing localization algorithms that can be considered as the most relevant, and we describe them under the proposed classification. We show that the performance of any localization algorithm has two components, i.e., a statistical one and a numerical one. The statistical performance is related to providing unbiased and minimum variance solutions, while the numerical one is related to ensuring the convergence of the solution. Furthermore, we show that a robust localization (i.e., statistically and numerI. A. Mantilla-Gaviria · J. V. Balbastre-Tejedor Instituto ITACA, Universidad Politécnica de Valencia, Camino de Vera S/N, 46022 Edificio 8G, Acceso B, Valencia, Spain e-mail: [email protected] J. V. Balbastre-Tejedor e-mail: [email protected] M. Leonardi · G. Galati (B) DIE, Tor Vergata University, Via del Politecnico 1, 00133 Rome, Italy e-mail: [email protected]; [email protected] M. Leonardi e-mail: [email protected] ically efficient) strategy, for airport surface surveillance, has to be composed of two specific kind of algorithms. Finally, an accuracy analysis, by using real data, is performed for the analyzed algorithms; some general guidelines are drawn and conclusions are provided.",
"title": ""
},
{
"docid": "b34eb302108ffd515ed9fc896fa7015f",
"text": "Recent magnetoencephalography (MEG) and functional magnetic resonance imaging studies of human auditory cortex are pointing to brain areas on lateral Heschl's gyrus as the 'pitch-processing center'. Here we describe results of a combined MEG-psychophysical study designed to investigate the timing of the formation of the percept of pitch and the generality of the hypothesized 'pitch-center'. We compared the cortical and behavioral responses to Huggins pitch (HP), a stimulus requiring binaural processing to elicit a pitch percept, with responses to tones embedded in noise (TN)-perceptually similar but physically very different signals. The stimuli were crafted to separate the electrophysiological responses to onset of the pitch percept from the onset of the initial stimulus. Our results demonstrate that responses to monaural pitch stimuli are affected by cross-correlational processes in the binaural pathway. Additionally, we show that MEG illuminates processes not simply observable in behavior. Crucially, the MEG data show that, although physically disparate, both HP and TN are mapped onto similar representations by 150 ms post-onset, and provide critical new evidence that the 'pitch onset response' reflects central pitch mechanisms, in agreement with models postulating a single, central pitch extractor.",
"title": ""
},
{
"docid": "da302043eecd427e70c48c28df189aa3",
"text": "Recent advances in electronics and wireless communication technologies have enabled the development of large-scale wireless sensor networks that consist of many low-power, low-cost, and small-size sensor nodes. Sensor networks hold the promise of facilitating large-scale and real-time data processing in complex environments. Security is critical for many sensor network applications, such as military target tracking and security monitoring. To provide security and privacy to small sensor nodes is challenging, due to the limited capabilities of sensor nodes in terms of computation, communication, memory/storage, and energy supply. In this article we survey the state of the art in research on sensor network security.",
"title": ""
},
{
"docid": "a2e023ddce575057617bd9aec023abe4",
"text": "The Kelly betting criterion ignores uncertainty in the probability of winning the bet, and uses an estimated probability. In general, such replacement of population parameters by sample estimates gives poorer out-of-sample than in-sample performance. We show that to improve out-of-sample performance the size of the bet should be shrunk in the presence of this parameter uncertainty, and compare some estimates of the shrinkage factor. From a simulation study and from an analysis of some tennis betting data we show that the shrunken Kelly approaches developed here offer an improvement over the ‘raw’ Kelly criterion. One approximate estimate of the shrinkage factor gives a ‘back of envelope’ correction to the Kelly criterion that could easily be used by bettors. We also study bet shrinkage and swelling for general risk-averse utility functions, and discuss the general implications of such results for decision theory.",
"title": ""
}
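As a rough illustration of the shrunken Kelly idea sketched in the abstract above, the following assumes a binary bet with net odds b and an estimated win probability; the fixed shrinkage factor is a hypothetical stand-in for the parameter-uncertainty correction the paper studies.

def kelly_fraction(p_win, b):
    # Standard Kelly stake for a binary bet with net odds b: f* = (b*p - (1 - p)) / b.
    return (b * p_win - (1.0 - p_win)) / b

def shrunken_kelly(p_hat, b, shrinkage=0.5):
    # Scale the raw Kelly stake down by an assumed shrinkage factor in [0, 1],
    # and never bet a negative fraction of the bankroll.
    return max(0.0, shrinkage * kelly_fraction(p_hat, b))

print(shrunken_kelly(0.55, 1.0))  # estimated 55% win probability at even odds -> stake 0.05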
] |
scidocsrr
|
75a5ae7f80d0f987077bd928d1d16e9a
|
CRF based road detection with multi-sensor fusion
|
[
{
"docid": "d5233cdbe0044f2296be6136f459edcf",
"text": "Road detection is one of the key issues of scene understanding for Advanced Driving Assistance Systems (ADAS). Recent approaches has addressed this issue through the use of different kinds of sensors, features and algorithms. KITTI-ROAD benchmark has provided an open-access dataset and standard evaluation mean for road area detection. In this paper, we propose an improved road detection algorithm that provides a pixel-level confidence map. The proposed approach is inspired from our former work based on road feature extraction using illuminant intrinsic image and plane extraction from v-disparity map segmentation. In the former research, detection results of road area are represented by binary map. The novelty of this improved algorithm is to introduce likelihood theory to build a confidence map of road detection. Such a strategy copes better with ambiguous environments, compared to a simple binary map. Evaluations and comparisons of both, binary map and confidence map, have been done using the KITTI-ROAD benchmark.",
"title": ""
},
{
"docid": "fd59153fb058ae604d889ea5f77abebb",
"text": "We describe a realtime system for finding and tracking unstructured paths in off-road conditions. The system was designed as part of the recent Darpa Grand Challenge and was tested over hundreds of miles of off-road driving. The unique feature of our approach is to combine geometric projection used for recovering Pitch and Yaw with Learning approaches for identifying familiar \"drivable\" regions in the scene. The region-based component segments the image to \"path\" and \"non-path\" regions based on texture analysis borne out of a learning-by-examples principle. The boundary-based component looks for the path bounding lines assuming a geometric model of a planar pathway bounded by parallel edges taken by a perspective camera. The combined effect of both sub-systems forms a robust system capable of finding the path even in situations where the vehicle is positioned out of the path - a situation which is not common for human drivers but is relevant for autonomous driving where the vehicle may find itself occasionally veering out of the path.",
"title": ""
},
{
"docid": "099bd9e751b8c1e3a07ee06f1ba4b55b",
"text": "This paper presents a robust stereo-vision-based drivable road detection and tracking system that was designed to navigate an intelligent vehicle through challenging traffic scenarios and increment road safety in such scenarios with advanced driver-assistance systems (ADAS). This system is based on a formulation of stereo with homography as a maximum a posteriori (MAP) problem in a Markov random held (MRF). Under this formulation, we develop an alternating optimization algorithm that alternates between computing the binary labeling for road/nonroad classification and learning the optimal parameters from the current input stereo pair itself. Furthermore, online extrinsic camera parameter reestimation and automatic MRF parameter tuning are performed to enhance the robustness and accuracy of the proposed system. In the experiments, the system was tested on our experimental intelligent vehicles under various real challenging scenarios. The results have substantiated the effectiveness and the robustness of the proposed system with respect to various challenging road scenarios such as heterogeneous road materials/textures, heavy shadows, changing illumination and weather conditions, and dynamic vehicle movements.",
"title": ""
}
] |
[
{
"docid": "d552b6beeea587bc014a4c31cabee121",
"text": "Recent successes of neural networks in solving combinatorial problems and games like Go, Poker and others inspire further attempts to use deep learning approaches in discrete domains. In the field of automated planning, the most popular approach is informed forward search driven by a heuristic function which estimates the quality of encountered states. Designing a powerful and easily-computable heuristics however is still a challenging problem on many domains. In this paper, we use machine learning to construct such heuristic automatically. We train a neural network to predict a minimal number of moves required to solve a given instance of Rubik’s cube. We then use the trained network as a heuristic distance estimator with a standard forward-search algorithm and compare the results with other heuristics. Our experiments show that the learning approach is competitive with state-of-the-art and might be the best choice in some use-case scenarios.",
"title": ""
},
{
"docid": "27312c44c3e453ad9e5f35a45b50329c",
"text": "The immunologic processes involved in Graves' disease (GD) have one unique characteristic--the autoantibodies to the TSH receptor (TSHR)--which have both linear and conformational epitopes. Three types of TSHR antibodies (stimulating, blocking, and cleavage) with different functional capabilities have been described in GD patients, which induce different signaling effects varying from thyroid cell proliferation to thyroid cell death. The establishment of animal models of GD by TSHR antibody transfer or by immunization with TSHR antigen has confirmed its pathogenic role and, therefore, GD is the result of a breakdown in TSHR tolerance. Here we review some of the characteristics of TSHR antibodies with a special emphasis on new developments in our understanding of what were previously called \"neutral\" antibodies and which we now characterize as autoantibodies to the \"cleavage\" region of the TSHR ectodomain.",
"title": ""
},
{
"docid": "b2996ae8d3cab83ba6e5b459fc4631d0",
"text": "This paper develops a novel sparse Bayesian learning (SBL)-based multiple-input multiple-output (MIMO) channel estimation technique for hybrid millimeter wave (mmWave) wireless systems by exploiting spatial sparsity in the wireless channels arising from the highly directional nature of propagation. The spatially sparse MIMO channel is modeled in terms of the basis array response matrices corresponding to the quantized directional cosines at the transmit and receive antenna arrays followed by the development of an expectation maximization (EM)-based sparse Bayesian learning (SBL) channel estimation approach. Subsequently, an enhanced variant of the SBL scheme is proposed based on hard thresholding the associated hyperparameter estimates, which is observed to significantly improve the accuracy of channel estimation. The Bayesian Cramér-Rao bound (BCRB) is also derived to benchmark the accuracy of the proposed SBL-based channel estimation schemes. Finally, simulation results are presented to illustrate the performance improvement achieved in comparison to the existing state-of-the-art orthogonal matching pursuit (OMP)-based sparse mmWave channel estimation scheme.",
"title": ""
},
{
"docid": "6ec3c98e78e78303a0dc0068ab90a17d",
"text": "INTRODUCTION\nIn this study we report a large series of patients with unilateral winged scapula (WS), with special attention to long thoracic nerve (LTN) palsy.\n\n\nMETHODS\nClinical and electrodiagnostic data were collected from 128 patients over a 25-year period.\n\n\nRESULTS\nCauses of unilateral WS were LTN palsy (n = 70), spinal accessory nerve (SAN) palsy (n = 39), both LTN and SAN palsy (n = 5), facioscapulohumeral dystrophy (FSH) (n = 5), orthopedic causes (n = 11), voluntary WS (n = 6), and no definite cause (n = 2). LTN palsy was related to neuralgic amyotrophy (NA) in 61 patients and involved the right side in 62 patients.\n\n\nDISCUSSION\nClinical data allow for identifying 2 main clinical patterns for LTN and SAN palsy. Electrodiagnostic examination should consider bilateral nerve conduction studies of the LTN and SAN, and needle electromyography of their target muscles. LTN palsy is the most frequent cause of unilateral WS and is usually related to NA. Voluntary WS and FSH must be considered in young patients. Muscle Nerve 57: 913-920, 2018.",
"title": ""
},
{
"docid": "baf3101f70784ff4dfb85cda627575e7",
"text": "Synchronous reluctance (SynRel) machines are gaining more and more importance in various fields of application thanks to their known merits like rugged construction, high efficiency, absence of field windings, and no or reduced need for permanent magnets. Out of the possible design variants, in this paper, SynRel motors with uniform mechanical air gap and circularly shaped flux barriers are considered and a conformal-mapping approach to their analytical modeling and simulation is proposed. A suitable conformal transformation is introduced to compute the reluctance of each rotor circularly shaped flux barrier and the result is then used to analytically determine the air-gap flux density distribution and the electromagnetic torque of the machine in arbitrary operating conditions. The accuracy of the methodology proposed is assessed against finite element analysis.",
"title": ""
},
{
"docid": "1d33abf7f0283b5b1fb6c6d7cddb8668",
"text": "Harmonic current is one of the major power quality problems due to non linear loads. In this paper the harmonic current in induction furnace load is compensated using active power filter. The inverters and rectifier switching creates harmonics in input side. It is then compensated by using active power filter with p-q theory and PI controller. The simulation result are obtained with FFT analysis and shows the minimisation of current harmonic.",
"title": ""
},
{
"docid": "cf34ffc1e1c32c930a3872dc463950f1",
"text": "Mobile advertising is evolving rapidly and becoming the key mobile data and revenue drivers of the mobile contents market. More powerful mobile devices have made possible the creation of better and richer mobile advertising. Moreover, the integration of location-aware technologies such as Cell Identification and GPS (Global Positioning Systems) into mobile devices has inspired the development of location-based advertising (LBA). As location-based services (LBS) have the potential to become the first realizable example of ubiquitous computing, business opportunities from these appear quite feasible. LBA can provide relevant, targeted, and timely advertising information to consumers at the point of need. The purpose of this study is to investigate consumer attitudes toward LBA, and the",
"title": ""
},
{
"docid": "46938d041228481cf3363f2c6dfcc524",
"text": "This paper investigates conditions under which modi cations to the reward function of a Markov decision process preserve the op timal policy It is shown that besides the positive linear transformation familiar from utility theory one can add a reward for tran sitions between states that is expressible as the di erence in value of an arbitrary poten tial function applied to those states Further more this is shown to be a necessary con dition for invariance in the sense that any other transformation may yield suboptimal policies unless further assumptions are made about the underlying MDP These results shed light on the practice of reward shap ing a method used in reinforcement learn ing whereby additional training rewards are used to guide the learning agent In par ticular some well known bugs in reward shaping procedures are shown to arise from non potential based rewards and methods are given for constructing shaping potentials corresponding to distance based and subgoal based heuristics We show that such po tentials can lead to substantial reductions in learning time",
"title": ""
},
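A minimal sketch of the potential-based shaping term described in the preceding abstract, F(s, s') = gamma*phi(s') - phi(s), assuming an arbitrary user-supplied potential function phi; the grid-world potential and coordinates below are made-up example values.

def shaped_reward(reward, state, next_state, phi, gamma=0.99):
    # Add the potential-based shaping term to the environment reward;
    # this form leaves the optimal policy unchanged.
    return reward + gamma * phi(next_state) - phi(state)

# Hypothetical distance-based potential for a 5x5 grid world with the goal at (4, 4).
goal = (4, 4)
phi = lambda s: -(abs(s[0] - goal[0]) + abs(s[1] - goal[1]))  # negative Manhattan distance

print(shaped_reward(0.0, (0, 0), (0, 1), phi))  # moving toward the goal yields a small bonus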
{
"docid": "18136fba311484e901282c31c9d206fd",
"text": "New demands, coming from the industry 4.0 concept of the near future production systems have to be fulfilled in the coming years. Seamless integration of current technologies with new ones is mandatory. The concept of Cyber-Physical Production Systems (CPPS) is the core of the new control and automation distributed systems. However, it is necessary to provide the global production system with integrated architectures that make it possible. This work analyses the requirements and proposes a model-based architecture and technologies to make the concept a reality.",
"title": ""
},
{
"docid": "18c3d950c4a2394185543a0f08bc1717",
"text": "Prediction is pervasive in human cognition and plays a central role in language comprehension. At an electrophysiological level, this cognitive function contributes substantially in determining the amplitude of the N400. In fact, the amplitude of the N400 to words within a sentence has been shown to depend on how predictable those words are: The more predictable a word, the smaller the N400 elicited. However, predictive processing can be based on different sources of information that allow anticipation of upcoming constituents and integration in context. In this study, we investigated the ERPs elicited during the comprehension of idioms, that is, prefabricated multiword strings stored in semantic memory. When a reader recognizes a string of words as an idiom before the idiom ends, she or he can develop expectations concerning the incoming idiomatic constituents. We hypothesized that the expectations driven by the activation of an idiom might differ from those driven by discourse-based constraints. To this aim, we compared the ERP waveforms elicited by idioms and two literal control conditions. The results showed that, in both cases, the literal conditions exhibited a more negative potential than the idiomatic condition. Our analyses suggest that before idiom recognition the effect is due to modulation of the N400 amplitude, whereas after idiom recognition a P300 for the idiomatic sentence has a fundamental role in the composition of the effect. These results suggest that two distinct predictive mechanisms are at work during language comprehension, based respectively on probabilistic information and on categorical template matching.",
"title": ""
},
{
"docid": "8777657edaa9e2a985ab6865490865b3",
"text": "Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models which are trained end-to-end, without explicitly modeling what to say and in what order. In this work, we present a neural network architecture which incorporates content selection and planning without sacrificing end-to-end training. We decompose the generation task into two stages. Given a corpus of data records (paired with descriptive documents), we first generate a content plan highlighting which information should be mentioned and in which order and then generate the document while taking the content plan into account. Automatic and human-based evaluation experiments show that our model outperforms strong baselines improving the state-of-the-art on the recently released ROTOWIRE dataset.",
"title": ""
},
{
"docid": "0a2cba5e6d5b6b467e34e79ee099f509",
"text": "Wearable devices are used in various applications to collect information including step information, sleeping cycles, workout statistics, and health-related information. Due to the nature and richness of the data collected by such devices, it is important to ensure the security of the collected data. This paper presents a new lightweight authentication scheme suitable for wearable device deployment. The scheme allows a user to mutually authenticate his/her wearable device(s) and the mobile terminal (e.g., Android and iOS device) and establish a session key among these devices (worn and carried by the same user) for secure communication between the wearable device and the mobile terminal. The security of the proposed scheme is then demonstrated through the broadly accepted real-or-random model, as well as using the popular formal security verification tool, known as the Automated validation of Internet security protocols and applications. Finally, we present a comparative summary of the proposed scheme in terms of the overheads such as computation and communication costs, security and functionality features of the proposed scheme and related schemes, and also the evaluation findings from the NS2 simulation.",
"title": ""
},
{
"docid": "492c5a20c4ef5b7a3ea08083ecf66bce",
"text": "We present the design for an absorbing metamaterial (MM) with near unity absorbance A(omega). Our structure consists of two MM resonators that couple separately to electric and magnetic fields so as to absorb all incident radiation within a single unit cell layer. We fabricate, characterize, and analyze a MM absorber with a slightly lower predicted A(omega) of 96%. Unlike conventional absorbers, our MM consists solely of metallic elements. The substrate can therefore be optimized for other parameters of interest. We experimentally demonstrate a peak A(omega) greater than 88% at 11.5 GHz.",
"title": ""
},
{
"docid": "7560af7ed6d3a2ca48c7be047e90ac47",
"text": "In the domain of computer games, research into the interaction between player and game has centred on 'enjoyment', often drawing in particular on optimal experience research and Csikszentmihalyi's 'Flow theory'. Flow is a well-established construct for examining experience in any setting and its application to game-play is intuitive. Nevertheless, it's not immediately obvious how to translate between the flow construct and an operative description of game-play. Previous research has attempted this translation through analogy. In this article we propose a practical, integrated approach for analysis of the mechanics and aesthetics of game-play, which helps develop deeper insights into the capacity for flow within games.\n The relationship between player and game, characterized by learning and enjoyment, is central to our analysis. We begin by framing that relationship within Cowley's user-system-experience (USE) model, and expand this into an information systems framework, which enables a practical mapping of flow onto game-play. We believe this approach enhances our understanding of a player's interaction with a game and provides useful insights for games' researchers seeking to devise mechanisms to adapt game-play to individual players.",
"title": ""
},
{
"docid": "34257e8924d8f9deec3171589b0b86f2",
"text": "The topics treated in The brain and emotion include the definition, nature, and functions of emotion (Ch. 3); the neural bases of emotion (Ch. 4); reward, punishment, and emotion in brain design (Ch. 10); a theory of consciousness and its application to understanding emotion and pleasure (Ch. 9); and neural networks and emotion-related learning (Appendix). The approach is that emotions can be considered as states elicited by reinforcers (rewards and punishers). This approach helps with understanding the functions of emotion, with classifying different emotions, and in understanding what information-processing systems in the brain are involved in emotion, and how they are involved. The hypothesis is developed that brains are designed around reward- and punishment-evaluation systems, because this is the way that genes can build a complex system that will produce appropriate but flexible behavior to increase fitness (Ch. 10). By specifying goals rather than particular behavioral patterns of responses, genes leave much more open the possible behavioral strategies that might be required to increase fitness. The importance of reward and punishment systems in brain design also provides a basis for understanding the brain mechanisms of motivation, as described in Chapters 2 for appetite and feeding, 5 for brain-stimulation reward, 6 for addiction, 7 for thirst, and 8 for sexual behavior.",
"title": ""
},
{
"docid": "9e289058f404720f73ee8240a84db54d",
"text": "PURPOSE OF THE STUDY\nWe assessed whether a shared site intergenerational care program informed by contact theory contributed to more desirable social behaviors of elders and children during intergenerational programming than a center with a more traditional programming approach that lacks some or all of the contact theory tenets.\n\n\nDESIGN AND METHODS\nWe observed 59 elder and child participants from the two sites during intergenerational activities. Using the Intergenerational Observation Scale, we coded participants' predominant behavior in 15-s intervals through each activity's duration. We then calculated for each individual the percentage of time frames each behavior code was predominant.\n\n\nRESULTS\nParticipants at the theory-based program demonstrated higher rates of intergenerational interaction, higher rates of solitary behavior, and lower rates of watching than at the traditional program.\n\n\nIMPLICATIONS\nContact theory tenets were optimized when coupled with evidence-based practices. Intergenerational programs with stakeholder support that promotes equal group status, cooperation toward a common goal, and mechanisms of friendship among participants can achieve important objectives for elder and child participants in care settings.",
"title": ""
},
{
"docid": "a39fb4e8c15878ba4fdac54f02451789",
"text": "The Cloud computing system can be easily threatened by various attacks, because most of the cloud computing systems provide service to so many people who are not proven to be trustworthy. Due to their distributed nature, cloud computing environment are easy targets for intruders[1]. There are various Intrusion Detection Systems having various specifications to each. Cloud computing have two approaches i. e. Knowledge-based IDS and Behavior-Based IDS to detect intrusions in cloud computing. Behavior-Based IDS assumes that an intrusion can be detected by observing a deviation from normal to expected behavior of the system or user[2]s. Knowledge-based IDS techniques apply knowledge",
"title": ""
},
{
"docid": "703ff903a68627b15c0840777c60e615",
"text": "S is of interest to researchers because of its ambiguous nature—sarcasm research can provide insight into how we use different types of linguistic and extralinguistic information to process ambiguous language of many types (Rockwell, 2005). We can also use knowledge about how sarcasm functions in conversation to inform our understanding of pragmatic language difficulties present in people with particular brain injuries or developmental disabilities, such as Autism Spectrum Disorders or Specific Language Impairment (Ryder & Leinonen, 2014; Surian, Baron-Cohen, & Van der Lely, 1996). For example, determining which kinds of information people typically use to detect sarcasm can help practitioners assist patients with pragmatic difficulties in identifying and interpreting nonliteral speech. Constructing a complete explanatory theory of sarcasm interpretation has been a source of controversy since Grice’s foundational 1975 paper, “Logic and Conversation,” which proposes that conversational partners assume one another to be acting cooperatively within the current speech exchange, and thus often interpret an apparent breach of conversational protocol as an indirect way of implying something which has not actually been said. Two major countertheories emerged soon after—Mention Theory and Pretense Theory—with criticisms of Grice and new ways to conceive of sarcastic utterances altogether. Both theories posit that only the literal meaning of the utterance is represented, and the speaker’s attitude toward the utterance is indicated by other methods. These single-meaning models differ from Grice’s in that they eliminate the need for an inferred “inverted” meaning of the utterance. A modified version of Grice’s original theory accounts for these criticisms as well as newer experimental evidence regarding the processing of nonliteral language. While these theories emphasize the interpretation of sarcasm, more recent sarcasm research focuses on sarcasm detection, specifically which kinds of information hearers utilize when determining whether a given utterance is sarcastic. For instance, while vocal and visual cues may be available to hearers for sarcasm detection in a face-to-face conversation, contextual and lexical cues may become more important in written language. Couched in Gricean terms and expanding upon previous research on sarcasm cues, I propose a particular set of sarcasm cues that rely on an additional violation of Grice’s maxims as a way of directing a conversational partner’s attention to the nonliteral nature of a sarcastic utterance. In order to test the validity of these maxim-violation cues, a set of tweets was compiled, half of which were sarcastic and half of which were intended literally. Half of each of those groups contained maxim-violation cues, while half did not. The use of tweets as the source of the utterances was intended to target language which did not make use of more well-studied sarcasm cues such as body language, intonation, and, for the most part, conversational context. Participants were asked whether each of the tweets was sarcastic. Participants also recorded how certain they were of their sarcasm detection for each tweet, providing a built-in tool to disregard noise in the data due to random guesses. The presence of maxim-violation cues predicted a statistically significant amount of variance in sarcasm detection. 
The effect of sarcastic intent on sarcasm detection was also statistically significant, suggesting that other content-based cues (aside from maxim-violation cues) come into play even in written language with little context. This paper will be organized as follows: Section 1 will establish a theoretical framework within which to analyze sarcasm detection; section 2 will introduce the concepts of sarcasm detection",
"title": ""
},
{
"docid": "39168bcf3cd49c13c86b13e89197ce7d",
"text": "An unprecedented booming has been witnessed in the research area of artistic style transfer ever since Gatys et al. introduced the neural method. One of the remaining challenges is to balance a trade-off among three critical aspects—speed, flexibility, and quality: (i) the vanilla optimization-based algorithm produces impressive results for arbitrary styles, but is unsatisfyingly slow due to its iterative nature, (ii) the fast approximation methods based on feed-forward neural networks generate satisfactory artistic effects but bound to only a limited number of styles, and (iii) feature-matching methods like AdaIN achieve arbitrary style transfer in a real-time manner but at a cost of the compromised quality. We find it considerably difficult to balance the trade-off well merely using a single feed-forward step and ask, instead, whether there exists an algorithm that could adapt quickly to any style, while the adapted model maintains high efficiency and good image quality. Motivated by this idea, we propose a novel method, coined MetaStyle, which formulates the neural style transfer as a bilevel optimization problem and combines learning with only a few post-processing update steps to adapt to a fast approximation model with satisfying artistic effects, comparable to the optimization-based methods for an arbitrary style. The qualitative and quantitative analysis in the experiments demonstrates that the proposed approach achieves high-quality arbitrary artistic style transfer effectively, with a good trade-off among speed, flexibility, and quality.",
"title": ""
},
{
"docid": "007f741a718d0c4a4f181676a39ed54a",
"text": "Following the development of computing and communication technologies, the idea of Internet of Things (IoT) has been realized not only at research level but also at application level. Among various IoT-related application fields, biometrics applications, especially face recognition, are widely applied in video-based surveillance, access control, law enforcement and many other scenarios. In this paper, we introduce a Face in Video Recognition (FivR) framework which performs real-time key-frame extraction on IoT edge devices, then conduct face recognition using the extracted key-frames on the Cloud back-end. With our key-frame extraction engine, we are able to reduce the data volume hence dramatically relief the processing pressure of the cloud back-end. Our experimental results show with IoT edge device acceleration, it is possible to implement face in video recognition application without introducing the middle-ware or cloud-let layer, while still achieving real-time processing speed.",
"title": ""
}
] |
scidocsrr
|
39707751b2f6aaea677ef953aee4ed47
|
Bridgeless SEPIC PFC Rectifier With Reduced Components and Conduction Losses
|
[
{
"docid": "b79110b1145fc8a35f20efdf0029fbac",
"text": "In this paper, a new bridgeless single-phase AC-DC converter with an automatic power factor correction (PFC) is proposed. The proposed rectifier is based on the single-ended primary inductance converter (SEPIC) topology and it utilizes a bidirectional switch and two fast diodes. The absence of an input diode bridge and the presence of only one diode in the flowing-current path during each switching cycle result in less conduction loss and improved thermal management compared to existing PFC rectifiers. Other advantages include simple control circuitry, reduced switch voltage stress, and low electromagnetic-interference noise. Performance comparison between the proposed and the conventional SEPIC PFC rectifier is performed. Simulation and experimental results are presented to demonstrate the feasibility of the proposed technique.",
"title": ""
}
] |
[
{
"docid": "5d6c2580602945084d5a643c335c40f2",
"text": "Probabilistic topic models are a suite of algorithms whose aim is to discover the hidden thematic structure in large archives of documents. In this article, we review the main ideas of this field, survey the current state-of-the-art, and describe some promising future directions. We first describe latent Dirichlet allocation (LDA) [8], which is the simplest kind of topic model. We discuss its connections to probabilistic modeling, and describe two kinds of algorithms for topic discovery. We then survey the growing body of research that extends and applies topic models in interesting ways. These extensions have been developed by relaxing some of the statistical assumptions of LDA, incorporating meta-data into the analysis of the documents, and using similar kinds of models on a diversity of data types such as social networks, images and genetics. Finally, we give our thoughts as to some of the important unexplored directions for topic modeling. These include rigorous methods for checking models built for data exploration, new approaches to visualizing text and other high dimensional data, and moving beyond traditional information engineering applications towards using topic models for more scientific ends.",
"title": ""
},
{
"docid": "14f7eb98dc3d24c94eb733a438127893",
"text": "Web users exhibit a variety of navigational interests through clicking a sequence of Web pages. Analysis of Web usage data will lead to discover Web user access pattern and facilitate users locate more preferable Web pages via collaborative recommending technique. Meanwhile, latent semantic analysis techniques provide a powerful means to capture user access pattern and associated task space. In this paper, we propose a collaborative Web recommendation framework, which employs Latent Dirichlet Allocation (LDA) to model underlying topic-simplex space and discover the associations between user sessions and multiple topics via probability inference. Experiments conducted on real Website usage dataset show that this approach can achieve better recommendation accuracy in comparison to existing techniques. The discovered topic-simplex expression can also provide a better interpretation of user navigational preference",
"title": ""
},
{
"docid": "996ed1bfadc4363d4717c6bd4da6ab89",
"text": "The recognition of dyslexia as a neurodevelopmental disorder has been hampered by the belief that it is not a specific diagnostic entity because it has variable and culture-specific manifestations. In line with this belief, we found that Italian dyslexics, using a shallow orthography which facilitates reading, performed better on reading tasks than did English and French dyslexics. However, all dyslexics were equally impaired relative to their controls on reading and phonological tasks. Positron emission tomography scans during explicit and implicit reading showed the same reduced activity in a region of the left hemisphere in dyslexics from all three countries, with the maximum peak in the middle temporal gyrus and additional peaks in the inferior and superior temporal gyri and middle occipital gyrus. We conclude that there is a universal neurocognitive basis for dyslexia and that differences in reading performance among dyslexics of different countries are due to different orthographies.",
"title": ""
},
{
"docid": "4345ed089e019402a5a4e30497bccc8a",
"text": "BACKGROUND\nFluridil, a novel topical antiandrogen, suppresses the human androgen receptor. While highly hydrophobic and hydrolytically degradable, it is systemically nonresorbable. In animals, fluridil demonstrated high local and general tolerance.\n\n\nOBJECTIVE\nTo evaluate the safety and efficacy of a topical anti- androgen, fluridil, in male androgenetic alopecia.\n\n\nMETHODS\nIn 20 men, for 21 days, occlusive forearm patches with 2, 4, and 6% fluridil, isopropanol, and/or vaseline were applied. In 43 men with androgenetic alopecia (AGA), Norwood grade II-Va, 2% fluridil was evaluated in a double-blind, placebo-controlled study after 3 months clinically by phototrichograms, hematology, and blood chemistry including analysis for fluridil, and at 9 months by phototrichograms.\n\n\nRESULTS\nNeither fluridil nor isopropanol showed sensitization/irritation potential, unlike vaseline. In all AGA subjects, baseline anagen/telogen counts were equal. After 3 months, the average anagen percentage did not change in placebo subjects, but increased in fluridil subjects from 76% to 85%, and at 9 months to 87%. In former placebo subjects, fluridil increased the anagen percentage after 6 months from 76% to 85%. Sexual functions, libido, hematology, and blood chemistry values were normal throughout, except that at 3 months, in the spring, serum testosterone increased within the normal range equally in placebo and fluridil groups. No fluridil or its decomposition product, BP-34, was detectable in the serum at 0, 3, or 90 days.\n\n\nCONCLUSION\nTopical fluridil is nonirritating, nonsensitizing, nonresorbable, devoid of systemic activity, and anagen promoting after daily use in most AGA males.",
"title": ""
},
{
"docid": "962003dc153dcb7cce754be8846ad62b",
"text": "Though the growing popularity of software-based middleboxes raises new requirements for network stack functionality, existing network stack have fundamental challenges in supporting the development of high-performance middlebox applications in a fast and flexible manner. In this work, we design and implement an enriched, programmable, and extensible network stack and its API to support the various requirements of middlebox applications. mOS supports proxy and monitoring function as well as traditional end TCP stack function. Further, we allow applications extend TCP functionality by hooking in middle of TCP processing and define user-level events on TCP state. Meanwhile, Epoll-like API allows applications manipulate read/write from/to byte stream buffers in an efficient way. To support an efficient consolidation of multiple middlebox applications in a single machine, mOS will allow multiple middlebox applications share the same TCP processing context without duplicated IP/TCP processing. We show that mOS can support various middlebox applications in an easy and efficient way without building TCP functionality from scratch.",
"title": ""
},
{
"docid": "3e357c91292ba1e1055fc3a493aba4eb",
"text": "The study of online social networks has attracted increasing interest. However, concerns are raised for the privacy risks of user data since they have been frequently shared among researchers, advertisers, and application developers. To solve this problem, a number of anonymization algorithms have been recently developed for protecting the privacy of social graphs. In this article, we proposed a graph node similarity measurement in consideration with both graph structure and descriptive information, and a deanonymization algorithm based on the measurement. Using the proposed algorithm, we evaluated the privacy risks of several typical anonymization algorithms on social graphs with thousands of nodes from Microsoft Academic Search, LiveJournal, and the Enron email dataset, and a social graph with millions of nodes from Tencent Weibo. Our results showed that the proposed algorithm was efficient and effective to deanonymize social graphs without any initial seed mappings. Based on the experiments, we also pointed out suggestions on how to better maintain the data utility while preserving privacy.",
"title": ""
},
{
"docid": "554a0628270978757eda989c67ac3416",
"text": "An accurate rainfall forecasting is very important for agriculture dependent countries like India. For analyzing the crop productivity, use of water resources and pre-planning of water resources, rainfall prediction is important. Statistical techniques for rainfall forecasting cannot perform well for long-term rainfall forecasting due to the dynamic nature of climate phenomena. Artificial Neural Networks (ANNs) have become very popular, and prediction using ANN is one of the most widely used techniques for rainfall forecasting. This paper provides a detailed survey and comparison of different neural network architectures used by researchers for rainfall forecasting. The paper also discusses the issues while applying different neural networks for yearly/monthly/daily rainfall forecasting. Moreover, the paper also presents different accuracy measures used by researchers for evaluating performance of ANN.",
"title": ""
},
{
"docid": "7b1a6768cc6bb975925a754343dc093c",
"text": "In response to the increasing volume of trajectory data obtained, e.g., from tracking athletes, animals, or meteorological phenomena, we present a new space-efficient algorithm for the analysis of trajectory data. The algorithm combines techniques from computational geometry, data mining, and string processing and offers a modular design that allows for a user-guided exploration of trajectory data incorporating domain-specific constraints and objectives.",
"title": ""
},
{
"docid": "3c1db6405945425c61495dd578afd83f",
"text": "This paper describes a novel driver-support system that helps to maintain the correct speed and headway (distance) with respect to lane curvature and other vehicles ahead. The system has been developed as part of the Integrating Project PReVENT under the European Framework Programme 6, which is named SAfe SPEed and safe distaNCE (SASPENCE). The application uses a detailed description of the situation ahead of the vehicle. Many sensors [radar, video camera, Global Positioning System (GPS) and accelerometers, digital maps, and vehicle-to-vehicle wireless local area network (WLAN) connections] are used, and state-of-the-art data fusion provides a model of the environment. The system then computes a feasible maneuver and compares it with the driver's behavior to detect possible mistakes. The warning strategies are based on this comparison. The system “talks” to the driver mainly via a haptic pedal or seat belt and “listens” to the driver mainly via the vehicle acceleration. This kind of operation, i.e., the comparison between what the system thinks is possible and what the driver appears to be doing, and the consequent dialog can be regarded as simple implementations of the rider-horse metaphor (H-metaphor). The system has been tested in several situations (driving simulator, hardware in the loop, and real road tests). Objective and subjective data have been collected, revealing good acceptance and effectiveness, particularly in awakening distracted drivers. The system intervenes only when a problem is actually detected in the headway and/or speed (approaching curves or objects) and has been shown to cause prompt reactions and significant speed correction before getting into really dangerous situations.",
"title": ""
},
{
"docid": "0fca0826e166ddbd4c26fe16086ff7ec",
"text": "Enteric redmouth disease (ERM) is a serious septicemic bacterial disease of salmonid fish species. It is caused by Yersinia ruckeri, a Gram-negative rod-shaped enterobacterium. It has a wide host range, broad geographical distribution, and causes significant economic losses in the fish aquaculture industry. The disease gets its name from the subcutaneous hemorrhages, it can cause at the corners of the mouth and in gums and tongue. Other clinical signs include exophthalmia, darkening of the skin, splenomegaly and inflammation of the lower intestine with accumulation of thick yellow fluid. The bacterium enters the fish via the secondary gill lamellae and from there it spreads to the blood and internal organs. Y. ruckeri can be detected by conventional biochemical, serological and molecular methods. Its genome is 3.7 Mb with 3406-3530 coding sequences. Several important virulence factors of Y. ruckeri have been discovered, including haemolyin YhlA and metalloprotease Yrp1. Both non-specific and specific immune responses of fish during the course of Y. ruckeri infection have been well characterized. Several methods of vaccination have been developed for controlling both biotype 1 and biotype 2 Y. ruckeri strains in fish. This review summarizes the current state of knowledge regarding enteric redmouth disease and Y. ruckeri: diagnosis, genome, virulence factors, interaction with the host immune responses, and the development of vaccines against this pathogen.",
"title": ""
},
{
"docid": "77ac1b0810b308cf9e957189c832f421",
"text": "We describe TensorFlow-Serving, a system to serve machine learning models inside Google which is also available in the cloud and via open-source. It is extremely flexible in terms of the types of ML platforms it supports, and ways to integrate with systems that convey new models and updated versions from training to serving. At the same time, the core code paths around model lookup and inference have been carefully optimized to avoid performance pitfalls observed in naive implementations. Google uses it in many production deployments, including a multi-tenant model hosting service called TFS2.",
"title": ""
},
{
"docid": "fa471f49367e03e57e7739d253385eaf",
"text": "■ Abstract The literature on effects of habitat fragmentation on biodiversity is huge. It is also very diverse, with different authors measuring fragmentation in different ways and, as a consequence, drawing different conclusions regarding both the magnitude and direction of its effects. Habitat fragmentation is usually defined as a landscape-scale process involving both habitat loss and the breaking apart of habitat. Results of empirical studies of habitat fragmentation are often difficult to interpret because ( a) many researchers measure fragmentation at the patch scale, not the landscape scale and ( b) most researchers measure fragmentation in ways that do not distinguish between habitat loss and habitat fragmentation per se, i.e., the breaking apart of habitat after controlling for habitat loss. Empirical studies to date suggest that habitat loss has large, consistently negative effects on biodiversity. Habitat fragmentation per se has much weaker effects on biodiversity that are at least as likely to be positive as negative. Therefore, to correctly interpret the influence of habitat fragmentation on biodiversity, the effects of these two components of fragmentation must be measured independently. More studies of the independent effects of habitat loss and fragmentation per se are needed to determine the factors that lead to positive versus negative effects of fragmentation per se. I suggest that the term “fragmentation” should be reserved for the breaking apart of habitat, independent of habitat loss.",
"title": ""
},
{
"docid": "ff8fd8bebb7e86b8d636ae528901b57f",
"text": "The ICH quality vision introduced the concept of quality by design (QbD), which requires a greater understanding of the raw material attributes, of process parameters, of their variability and their interactions. Microcrystalline cellulose (MCC) is one of the most important tableting excipients thanks to its outstanding dry binding properties, enabling the manufacture of tablets by direct compression (DC). DC remains the most economical technique to produce large batches of tablets, however its efficacy is directly impacted by the raw material attributes. Therefore excipients' variability and their impact on drug product performance need to be thoroughly understood. To help with this process, this review article gathers prior knowledge on MCC, focuses on its use in DC and lists some of its potential critical material attributes (CMAs).",
"title": ""
},
{
"docid": "38a7f57900474553f6979131e7f39e5d",
"text": "A cascade switched-capacitor ΔΣ analog-to-digital converter, suitable for WLANs, is presented. It uses a double-sampling scheme with single set of DAC capacitors, and an improved low-distortion architecture with an embedded-adder integrator. The proposed architecture eliminates one active stage, and reduces the output swings in the loop-filter and hence the non-linearity. It was fabricated with a 0.18um CMOS process. The prototype chip achieves 75.5 dB DR, 74 dB SNR, 73.8 dB SNDR, −88.1 dB THD, and 90.2 dB SFDR over a 10 MHz signal band with an FoM of 0.27 pJ/conv-step.",
"title": ""
},
{
"docid": "187127dd1ab5f97b1158a77a25ddce91",
"text": "We introduce stochastic variational inference for Gaussian process models. This enables the application of Gaussian process (GP) models to data sets containing millions of data points. We show how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables which factorize the model in the necessary manner to perform variational inference. Our approach is readily extended to models with non-Gaussian likelihoods and latent variable models based around Gaussian processes. We demonstrate the approach on a simple toy problem and two real world data sets.",
"title": ""
},
{
"docid": "b4c8ebb06c527c81e568c82afb2d4b6d",
"text": "Kriging or Gaussian Process Regression is applied in many fields as a non-linear regression model as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging, that is cubic and quadratic in the number of data points respectively, becomes a major bottleneck with more and more data available nowadays. In this paper, we propose a general methodology for the complexity reduction, called cluster Kriging, where the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them. In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods on a welldefined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing algorithms. Moreover, some practical suggestions are provided for using the proposed algorithms.",
"title": ""
},
{
"docid": "2a422c6047bca5a997d5c3d0ee080437",
"text": "Connecting mathematical logic and computation, it ensures that some aspects of programming are absolute.",
"title": ""
},
{
"docid": "626e4d90b16a4e874c391d79b3ec39fe",
"text": "We propose novel neural temporal models for predicting and synthesizing human motion, achieving state-of-theart in modeling long-term motion trajectories while being competitive with prior work in short-term prediction, with significantly less required computation. Key aspects of our proposed system include: 1) a novel, two-level processing architecture that aids in generating planned trajectories, 2) a simple set of easily computable features that integrate derivative information into the model, and 3) a novel multi-objective loss function that helps the model to slowly progress from the simpler task of next-step prediction to the harder task of multi-step closed-loop prediction. Our results demonstrate that these innovations facilitate improved modeling of long-term motion trajectories. Finally, we propose a novel metric, called Normalized Power Spectrum Similarity (NPSS), to evaluate the long-term predictive ability of motion synthesis models, complementing the popular mean-squared error (MSE) measure of the Euler joint angles over time. We conduct a user study to determine if the proposed NPSS correlates with human evaluation of longterm motion more strongly than MSE and find that it indeed does.",
"title": ""
}
] |
scidocsrr
|
eb47e0953346f2a60fb0486508773e87
|
Mobile Cloud Computing: A Comparison of Application Models
|
[
{
"docid": "31f838fb0c7db7e8b58fb1788d5554c8",
"text": "Today’s smartphones operate independently of each other, using only local computing, sensing, networking, and storage capabilities and functions provided by remote Internet services. It is generally difficult or expensive for one smartphone to share data and computing resources with another. Data is shared through centralized services, requiring expensive uploads and downloads that strain wireless data networks. Collaborative computing is only achieved using ad hoc approaches. Coordinating smartphone data and computing would allow mobile applications to utilize the capabilities of an entire smartphone cloud while avoiding global network bottlenecks. In many cases, processing mobile data in-place and transferring it directly between smartphones would be more efficient and less susceptible to network limitations than offloading data and processing to remote servers. We have developed Hyrax, a platform derived from Hadoop that supports cloud computing on Android smartphones. Hyrax allows client applications to conveniently utilize data and execute computing jobs on networks of smartphones and heterogeneous networks of phones and servers. By scaling with the number of devices and tolerating node departure, Hyrax allows applications to use distributed resources abstractly, oblivious to the physical nature of the cloud. The design and implementation of Hyrax is described, including experiences in porting Hadoop to the Android platform and the design of mobilespecific customizations. The scalability of Hyrax is evaluated experimentally and compared to that of Hadoop. Although the performance of Hyrax is poor for CPU-bound tasks, it is shown to tolerate node-departure and offer reasonable performance in data sharing. A distributed multimedia search and sharing application is implemented to qualitatively evaluate Hyrax from an application development perspective.",
"title": ""
}
] |
[
{
"docid": "a52ae731397db5fb56bf6b65882ccc77",
"text": "This paper presents a class@cation of intrusions with respect to technique as well as to result. The taxonomy is intended to be a step on the road to an established taxonomy of intrusions for use in incident reporting, statistics, warning bulletins, intrusion detection systems etc. Unlike previous schemes, it takes the viewpoint of the system owner and should therefore be suitable to a wider community than that of system developers and vendors only. It is based on data from a tzalistic intrusion experiment, a fact that supports the practical applicability of the scheme. The paper also discusses general aspects of classification, and introduces a concept called dimension. After having made a broad survey of previous work in thejield, we decided to base our classification of intrusion techniques on a scheme proposed by Neumann and Parker in I989 and to further refine relevant parts of their scheme. Our classification of intrusion results is derived from the traditional three aspects of computer security: confidentiality, availability and integrity.",
"title": ""
},
{
"docid": "7c1be047bbb4fe3f988aaccfd0add70f",
"text": "We reviewed scientific literature pertaining to known and putative disease agents associated with the lone star tick, Amblyomma americanum. Reports in the literature concerning the role of the lone star tick in the transmission of pathogens of human and animal diseases have sometimes been unclear and even contradictory. This overview has indicated that A. americanum is involved in the ecology of several disease agents of humans and other animals, and the role of this tick as a vector of these diseases ranges from incidental to significant. Probably the clearest relationship is that of Ehrlichia chaffeensis and A. americanum. Also, there is a definite association between A. americanum and tularemia, as well as between the lone star tick and Theileria cervi to white-tailed deer. Evidence of Babesia cervi (= odocoilei) being transmitted to deer by A. americanum is largely circumstantial at this time. The role of A. americanum in cases of southern tick-associated rash illness (STARI) is currently a subject of intensive investigations with important implications. The lone star tick has been historically reported to be a vector of Rocky Mountain spotted fever rickettsiae, but current opinions are to the contrary. Evidence incriminated A. americanum as the vector of Bullis fever in the 1940s, but the disease apparently has disappeared. Q fever virus has been found in unfed A. americanum, but the vector potential, if any, is poorly understood at this time. Typhus fever and toxoplasmosis have been studied in the lone star tick, and several non-pathogenic organisms have been recovered. Implications of these tick-disease relationships are discussed.",
"title": ""
},
{
"docid": "e3e8ef3239fb6a7565a177cbceb1bee8",
"text": "A large number of studies analyse object detection and pose estimation at visual level in 2D, discussing the effects of challenges such as occlusion, clutter, texture, etc., on the performances of the methods, which work in the context of RGB modality. Interpreting the depth data, the study in this paper presents thorough multi-modal analyses. It discusses the above-mentioned challenges for full 6D object pose estimation in RGB-D images comparing the performances of several 6D detectors in order to answer the following questions: What is the current position of the computer vision community for maintaining “automation” in robotic manipulation? What next steps should the community take for improving “autonomy” in robotics while handling objects? Direct comparison of the detectors is difficult, since they are tested on multiple datasets with different characteristics and are evaluated using widely varying evaluation protocols. To deal with these issues, we follow a threefold strategy: five representative object datasets, mainly differing from the point of challenges that they involve, are collected. Then, two classes of detectors are tested on the collected datasets. Lastly, the baselines’ performances are evaluated using two different evaluation metrics under uniform scoring criteria. Regarding the experiments conducted, we analyse our observations on the baselines along with the challenges involved in the interested datasets, and we suggest a number of insights for the next steps to be taken, for improving the autonomy in robotics.",
"title": ""
},
{
"docid": "9eb0d79f9c13f30f53fb7214b337880d",
"text": "Many real world problems can be solved with Artificial Neural Networks in the areas of pattern recognition, signal processing and medical diagnosis. Most of the medical data set is seldom complete. Artificial Neural Networks require complete set of data for an accurate classification. This paper dwells on the various missing value techniques to improve the classification accuracy. The proposed system also investigates the impact on preprocessing during the classification. A classifier was applied to Pima Indian Diabetes Dataset and the results were improved tremendously when using certain combination of preprocessing techniques. The experimental system achieves an excellent classification accuracy of 99% which is best than before.",
"title": ""
},
{
"docid": "7de29b042513aaf1a3b12e71bee6a338",
"text": "The widespread use of deception in online sources has motivated the need for methods to automatically profile and identify deceivers. This work explores deception, gender and age detection in short texts using a machine learning approach. First, we collect a new open domain deception dataset also containing demographic data such as gender and age. Second, we extract feature sets including n-grams, shallow and deep syntactic features, semantic features, and syntactic complexity and readability metrics. Third, we build classifiers that aim to predict deception, gender, and age. Our findings show that while deception detection can be performed in short texts even in the absence of a predetermined domain, gender and age prediction in deceptive texts is a challenging task. We further explore the linguistic differences in deceptive content that relate to deceivers gender and age and find evidence that both age and gender play an important role in people’s word choices when fabricating lies.",
"title": ""
},
{
"docid": "64de73be55c4b594934b0d1bd6f47183",
"text": "Smart grid has emerged as the next-generation power grid via the convergence of power system engineering and information and communication technology. In this article, we describe smart grid goals and tactics, and present a threelayer smart grid network architecture. Following a brief discussion about major challenges in smart grid development, we elaborate on smart grid cyber security issues. We define a taxonomy of basic cyber attacks, upon which sophisticated attack behaviors may be built. We then introduce fundamental security techniques, whose integration is essential for achieving full protection against existing and future sophisticated security attacks. By discussing some interesting open problems, we finally expect to trigger more research efforts in this emerging area.",
"title": ""
},
{
"docid": "93afb696fa395a7f7c2a4f3fc2ac690d",
"text": "We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained by using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53 sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that contextdependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.",
"title": ""
},
{
"docid": "f7d30db4b04b33676d386953aebf503c",
"text": "Microvascular free flap transfer currently represents one of the most popular methods for mandibularreconstruction. With the various free flap options nowavailable, there is a general consensus that no single kindof osseous or osteocutaneous flap can resolve the entire spectrum of mandibular defects. A suitable flap, therefore, should be selected according to the specific type of bone and soft tissue defect. We have developed an algorithm for mandibular reconstruction, in which the bony defect is termed as either “lateral” or “anterior” and the soft-tissue defect is classified as “none,” “skin or mucosal,” or “through-and-through.” For proper flap selection, the bony defect condition should be considered first, followed by the soft-tissue defect condition. When the bony defect is “lateral” and the soft tissue is not defective, the ilium is the best choice. When the bony defect is “lateral” and a small “skin or mucosal” soft-tissue defect is present, the fibula represents the optimal choice. When the bony defect is “lateral” and an extensive “skin or mucosal” or “through-and-through” soft-tissue defect exists, the scapula should be selected. When the bony defect is “anterior,” the fibula should always be selected. However, when an “anterior” bone defect also displays an “extensive” or “through-and-through” soft-tissue defect, the fibula should be usedwith other soft-tissue flaps. Flaps such as a forearm flap, anterior thigh flap, or rectus abdominis musculocutaneous flap are suitable, depending on the size of the soft-tissue defect.",
"title": ""
},
{
"docid": "130efef512294d14094a900693efebfd",
"text": "Metaphor comprehension involves an interaction between the meaning of the topic and the vehicle terms of the metaphor. Meaning is represented by vectors in a high-dimensional semantic space. Predication modifies the topic vector by merging it with selected features of the vehicle vector. The resulting metaphor vector can be evaluated by comparing it with known landmarks in the semantic space. Thus, metaphorical prediction is treated in the present model in exactly the same way as literal predication. Some experimental results concerning metaphor comprehension are simulated within this framework, such as the nonreversibility of metaphors, priming of metaphors with literal statements, and priming of literal statements with metaphors.",
"title": ""
},
{
"docid": "3c8ac7bd31d133b4d43c0d3a0f08e842",
"text": "How we teach and learn is undergoing a revolution, due to changes in technology and connectivity. Education may be one of the best application areas for advanced NLP techniques, and NLP researchers have much to contribute to this problem, especially in the areas of learning to write, mastery learning, and peer learning. In this paper I consider what happens when we convert natural language processors into natural language coaches. 1 Why Should You Care, NLP Researcher? There is a revolution in learning underway. Students are taking Massive Open Online Courses as well as online tutorials and paid online courses. Technology and connectivity makes it possible for students to learn from anywhere in the world, at any time, to fit their schedules. And in today’s knowledge-based economy, going to school only in one’s early years is no longer enough; in future most people are going to need continuous, lifelong education. Students are changing too — they expect to interact with information and technology. Fortunately, pedagogical research shows significant benefits of active learning over passive methods. The modern view of teaching means students work actively in class, talk with peers, and are coached more than graded by their instructors. In this new world of education, there is a great need for NLP research to step in and help. I hope in this paper to excite colleagues about the possibilities and suggest a few new ways of looking at them. I do not attempt to cover the field of language and learning comprehensively, nor do I claim there is no work in the field. In fact there is quite a bit, such as a recent special issue on language learning resources (Sharoff et al., 2014), the long running ACL workshops on Building Educational Applications using NLP (Tetreault et al., 2015), and a recent shared task competition on grammatical error detection for second language learners (Ng et al., 2014). But I hope I am casting a few interesting thoughts in this direction for those colleagues who are not focused on this particular topic.",
"title": ""
},
{
"docid": "34913781debe37f36befc853d57eba0c",
"text": "Michael R. Benjamin Naval Undersea Warfare Center, Newport, Rhode Island 02841, and Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 e-mail: [email protected] Henrik Schmidt Department of Mechanical Engineering, Laboratory for Autonomous Marine Sensing Systems, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 e-mail: [email protected] Paul M. Newman Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, United Kingdom e-mail: [email protected] John J. Leonard Department of Mechanical Engineering, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 e-mail: [email protected]",
"title": ""
},
{
"docid": "a9e30e02bcbac0f117820d21bf9941da",
"text": "The question of how identity is affected when diagnosed with dementia is explored in this capstone thesis. With the rise of dementia diagnoses (Goldstein-Levitas, 2016) there is a need for understanding effective approaches to care as emotional components remain intact. The literature highlights the essence of personhood and how person-centered care (PCC) is essential to preventing isolation and impacting a sense of self and well-being (Killick, 2004). Meeting spiritual needs in the sense of hope and purpose may also improve quality of life and delay symptoms. Dance/movement therapy (DMT) is specifically highlighted as an effective approach as sessions incorporate the components to physically, emotionally, and spiritually stimulate the individual with dementia. A DMT intervention was developed and implemented at an assisted living facility in the Boston area within a specific unit dedicated to the care of residents who had a primary diagnosis of mild to severe dementia. A Chacian framework is used with sensory stimulation techniques to address physiological needs. Results indicated positive experiences from observations and merited the need to conduct more research to credit DMT’s effectiveness with geriatric populations.",
"title": ""
},
{
"docid": "e171be9168fc94527980e767742555d3",
"text": "OBJECTIVE\nRelatively minor abusive injuries can precede severe physical abuse in infants. Our objective was to determine how often abused infants have a previous history of \"sentinel\" injuries, compared with infants who were not abused.\n\n\nMETHODS\nCase-control, retrospective study of 401, <12-month-old infants evaluated for abuse in a hospital-based setting and found to have definite, intermediate concern for, or no abuse after evaluation by the hospital-based Child Protection Team. A sentinel injury was defined as a previous injury reported in the medical history that was suspicious for abuse because the infant could not cruise, or the explanation was implausible.\n\n\nRESULTS\nOf the 200 definitely abused infants, 27.5% had a previous sentinel injury compared with 8% of the 100 infants with intermediate concern for abuse (odds ratio: 4.4, 95% confidence interval: 2.0-9.6; P < .001). None of the 101 nonabused infants (controls) had a previous sentinel injury (P < .001). The type of sentinel injury in the definitely abused cohort was bruising (80%), intraoral injury (11%), and other injury (7%). Sentinel injuries occurred in early infancy: 66% at <3 months of age and 95% at or before the age of 7 months. Medical providers were reportedly aware of the sentinel injury in 41.9% of cases.\n\n\nCONCLUSIONS\nPrevious sentinel injuries are common in infants with severe physical abuse and rare in infants evaluated for abuse and found to not be abused. Detection of sentinel injuries with appropriate interventions could prevent many cases of abuse.",
"title": ""
},
{
"docid": "a753be5a5f81ae77bfcb997a2748d723",
"text": "The design of electromagnetic (EM) interference filters for converter systems is usually based on measurements with a prototype during the final stages of the design process. Predicting the conducted EM noise spectrum of a converter by simulation in an early stage has the potential to save time/cost and to investigate different noise reduction methods, which could, for example, influence the layout or the design of the control integrated circuit. Therefore, the main sources of conducted differential-mode (DM) and common-mode (CM) noise of electronic ballasts for fluorescent lamps are identified in this paper. For each source, the noise spectrum is calculated and a noise propagation model is presented. The influence of the line impedance stabilizing network (LISN) and the test receiver is also included. Based on the presented models, noise spectrums are calculated and validated by measurements.",
"title": ""
},
{
"docid": "eb5208a4793fa5c5723b20da0421af26",
"text": "High-level synthesis promises a significant shortening of the FPGA design cycle when compared with design entry using register transfer level (RTL) languages. Recent evaluations report that C-to-RTL flows can produce results with a quality close to hand-crafted designs [1]. Algorithms which use dynamic, pointer-based data structures, which are common in software, remain difficult to implement well. In this paper, we describe a comparative case study using Xilinx Vivado HLS as an exemplary state-of-the-art high-level synthesis tool. Our test cases are two alternative algorithms for the same compute-intensive machine learning technique (clustering) with significantly different computational properties. We compare a data-flow centric implementation to a recursive tree traversal implementation which incorporates complex data-dependent control flow and makes use of pointer-linked data structures and dynamic memory allocation. The outcome of this case study is twofold: We confirm similar performance between the hand-written and automatically generated RTL designs for the first test case. The second case reveals a degradation in latency by a factor greater than 30× if the source code is not altered prior to high-level synthesis. We identify the reasons for this shortcoming and present code transformations that narrow the performance gap to a factor of four. We generalise our source-to-source transformations whose automation motivates research directions to improve high-level synthesis of dynamic data structures in the future.",
"title": ""
},
{
"docid": "39d4375dd9b8353241482bff577ee812",
"text": "Cellulose constitutes the most abundant renewable polymer resource available today. As a chemical raw material, it is generally well-known that it has been used in the form of fibers or derivatives for nearly 150 years for a wide spectrum of products and materials in daily life. What has not been known until relatively recently is that when cellulose fibers are subjected to acid hydrolysis, the fibers yield defect-free, rod-like crystalline residues. Cellulose nanocrystals (CNs) have garnered in the materials community a tremendous level of attention that does not appear to be relenting. These biopolymeric assemblies warrant such attention not only because of their unsurpassed quintessential physical and chemical properties (as will become evident in the review) but also because of their inherent renewability and sustainability in addition to their abundance. They have been the subject of a wide array of research efforts as reinforcing agents in nanocomposites due to their low cost, availability, renewability, light weight, nanoscale dimension, and unique morphology. Indeed, CNs are the fundamental constitutive polymeric motifs of macroscopic cellulosic-based fibers whose sheer volume dwarfs any known natural or synthetic biomaterial. Biopolymers such as cellulose and lignin and † North Carolina State University. ‡ Helsinki University of Technology. Dr. Youssef Habibi is a research assistant professor at the Department of Forest Biomaterials at North Carolina State University. He received his Ph.D. in 2004 in organic chemistry from Joseph Fourier University (Grenoble, France) jointly with CERMAV (Centre de Recherche sur les Macromolécules Végétales) and Cadi Ayyad University (Marrakesh, Morocco). During his Ph.D., he worked on the structural characterization of cell wall polysaccharides and also performed surface chemical modification, mainly TEMPO-mediated oxidation, of crystalline polysaccharides, as well as their nanocrystals. Prior to joining NCSU, he worked as assistant professor at the French Engineering School of Paper, Printing and Biomaterials (PAGORA, Grenoble Institute of Technology, France) on the development of biodegradable nanocomposites based on nanocrystalline polysaccharides. He also spent two years as postdoctoral fellow at the French Institute for Agricultural Research, INRA, where he developed new nanostructured thin films based on cellulose nanowiskers. Dr. Habibi’s research interests include the sustainable production of materials from biomass, development of high performance nanocomposites from lignocellulosic materials, biomass conversion technologies, and the application of novel analytical tools in biomass research. Chem. Rev. 2010, 110, 3479–3500 3479",
"title": ""
},
{
"docid": "77af12d87cd5827f35d92968d1888162",
"text": "Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"title": ""
},
{
"docid": "46b5f32b9f08dd5d1fbe2d6c2fe532ee",
"text": "As more recombinant human proteins become available on the market, the incidence of immunogenicity problems is rising. The antibodies formed against a therapeutic protein can result in serious clinical effects, such as loss of efficacy and neutralization of the endogenous protein with essential biological functions. Here we review the literature on the relations between the immunogenicity of the therapeutic proteins and their structural properties. The mechanisms by which protein therapeutics can induce antibodies as well as the models used to study immunogenicity are discussed. Examples of how the chemical structure (including amino acid sequence, glycosylation, and pegylation) can influence the incidence and level of antibody formation are given. Moreover, it is shown that physical degradation (especially aggregation) of the proteins as well as chemical decomposition (e.g., oxidation) may enhance the immune response. To what extent the presence of degradation products in protein formulations influences their immunogenicity still needs further investigation. Immunization of transgenic animals, tolerant for the human protein, with well-defined, artificially prepared degradation products of therapeutic proteins may shed more light on the structure-immunogenicity relationships of recombinant human proteins.",
"title": ""
},
{
"docid": "22cb0a390087efcb9fa2048c74e9845f",
"text": "This paper describes the early conception and latest developments of electroactive polymer (EAP)-based sensors, actuators, electronic components, and power sources, implemented as wearable devices for smart electronic textiles (e-textiles). Such textiles, functioning as multifunctional wearable human interfaces, are today considered relevant promoters of progress and useful tools in several biomedical fields, such as biomonitoring, rehabilitation, and telemedicine. After a brief outline on ongoing research and the first products on e-textiles under commercial development, this paper presents the most highly performing EAP-based devices developed by our lab and other research groups for sensing, actuation, electronics, and energy generation/storage, with reference to their already demonstrated or potential applicability to electronic textiles",
"title": ""
},
{
"docid": "d6e093ecc3325fcdd2e29b0b961b9b21",
"text": "[Context and motivation] Natural language is the main representation means of industrial requirements documents, which implies that requirements documents are inherently ambiguous. There exist guidelines for ambiguity detection, such as the Ambiguity Handbook [1]. In order to detect ambiguities according to the existing guidelines, it is necessary to train analysts. [Question/problem] Although ambiguity detection guidelines were extensively discussed in literature, ambiguity detection has not been automated yet. Automation of ambiguity detection is one of the goals of the presented paper. More precisely, the approach and tool presented in this paper have three goals: (1) to automate ambiguity detection, (2) to make plausible for the analyst that ambiguities detected by the tool represent genuine problems of the analyzed document, and (3) to educate the analyst by explaining the sources of the detected ambiguities. [Principal ideas/results] The presented tool provides reliable ambiguity detection, in the sense that it detects four times as many genuine ambiguities as than an average human analyst. Furthermore, the tool offers high precision ambiguity detection and does not present too many false positives to the human analyst. [Contribution] The presented tool is able both to detect the ambiguities and to explain ambiguity sources. Thus, besides pure ambiguity detection, it can be used to educate analysts, too. Furthermore, it provides a significant potential for considerable time and cost savings and at the same time quality improvements in the industrial requirements engineering.",
"title": ""
}
] |
scidocsrr
|
661c94e1afa6ce0abebe959556284d31
|
An information theoretic approach for extracting and tracing non-functional requirements
|
[
{
"docid": "221cd488d735c194e07722b1d9b3ee2a",
"text": "HURTS HELPS HURTS HELPS Data Type [Target System] Implicit HELPS HURTS HURTS BREAKS ? Invocation [Target System] Pipe & HELPS BREAKS BREAKS HELPS Filter WHEN [Target condl System] condl: size of data in domain is huge Figure 13.4. A generic Correlation Catalogue, based on [Garlan93]. Figure 13.3 shows a method which decomposes the topic on process, including algorithms as used in [Garlan93]. Decomposition methods for processes are also described in [Nixon93, 94a, 97a], drawing on implementations of processes [Chung84, 88]. These two method definitions are unparameterized. A fuller catalogue would include parameterized definitions too. Operationalization methods, which organize knowledge about satisficing NFR softgoals, are embedded in architectural designs when selected. For example, an ImplicitFunctionlnvocationRegime (based on [Garlan93]' architecture 3) can be used to hide implementation details in order to make an architectural 358 NON-FUNCTIONAL REQUIREMENTS IN SOFTWARE ENGINEERING design more extensible, thus contributing to one of the softgoals in the above decomposition. Argumentation methods and templates are used to organize principles and guidelines for making design rationale for or against design decisions (Cf. [J. Lee91]).",
"title": ""
},
{
"docid": "d95ee6cd088919de0df4087f5413eda5",
"text": "Wikipedia provides a knowledge base for computing word relatedness in a more structured fashion than a search engine and with more coverage than WordNet. In this work we present experiments on using Wikipedia for computing semantic relatedness and compare it to WordNet on various benchmarking datasets. Existing relatedness measures perform better using Wikipedia than a baseline given by Google counts, and we show that Wikipedia outperforms WordNet when applied to the largest available dataset designed for that purpose. The best results on this dataset are obtained by integrating Google, WordNet and Wikipedia based measures. We also show that including Wikipedia improves the performance of an NLP application processing naturally occurring texts.",
"title": ""
}
] |
[
{
"docid": "244c79d374bdbe44406fc514610e4ee7",
"text": "This article surveys some theoretical aspects of cellular automata CA research. In particular, we discuss classical and new results on reversibility, conservation laws, limit sets, decidability questions, universality and topological dynamics of CA. The selection of topics is by no means comprehensive and reflects the research interests of the author. The main goal is to provide a tutorial of CA theory to researchers in other branches of natural computing, to give a compact collection of known results with references to their proofs, and to suggest some open problems. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "356dbb5e8e576cfa49153962a6e3be93",
"text": "Knowing how many people occupy a building, and where they are located, is a key component of smart building services. Commercial, industrial and residential buildings often incorporate systems used to determine occupancy. However, relatively simple sensor technology and control algorithms limit the effectiveness of smart building services. In this paper we propose to replace sensor technology with time series models that can predict the number of occupants at a given location and time. We use Wi-Fi datasets readily available in abundance for smart building services and train Auto Regression Integrating Moving Average (ARIMA) models and Long Short-Term Memory (LSTM) time series models. As a use case scenario of smart building services, these models allow forecasting of the number of people at a given time and location in 15, 30 and 60 minutes time intervals at building as well as Access Point (AP) level. For LSTM, we build our models in two ways: a separate model for every time scale, and a combined model for the three time scales. Our experiments show that LSTM combined model reduced the computational resources with respect to the number of neurons by 74.48 % for the AP level, and by 67.13 % for the building level. Further, the root mean square error (RMSE) was reduced by 88.2%–93.4% for LSTM in comparison to ARIMA for the building levels models and by 80.9 %–87% for the AP level models.",
"title": ""
},
{
"docid": "cd068158b6bebadfb8242b6412ec5bbb",
"text": "artefacts, 65–67 built environments and, 67–69 object artefacts, 65–66 structuralism and, 66–67 See also Non–discursive technique Asymmetry, 88–89, 91 Asynchronous systems, 187 Autonomous architecture, 336–338",
"title": ""
},
{
"docid": "fb162c94248297f35825ff1022ad2c59",
"text": "This article traces the evolution of ambulance location and relocation models proposed over the past 30 years. The models are classified in two main categories. Deterministic models are used at the planning stage and ignore stochastic considerations regarding the availability of ambulances. Probabilistic models reflect the fact that ambulances operate as servers in a queueing system and cannot always answer a call. In addition, dynamic models have been developed to repeatedly relocate ambulances throughout the day. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "6c2a033b374b4318cd94f0a617ec705a",
"text": "In this paper, we propose to use Deep Neural Net (DNN), which has been recently shown to reduce speech recognition errors significantly, in Computer-Aided Language Learning (CALL) to evaluate English learners’ pronunciations. Multi-layer, stacked Restricted Boltzman Machines (RBMs), are first trained as nonlinear basis functions to represent speech signals succinctly, and the output layer is discriminatively trained to optimize the posterior probabilities of correct, sub-phonemic “senone” states. Three Goodness of Pronunciation (GOP) scores, including: the likelihood-based posterior probability, averaged framelevel posteriors of the DNN output layer “senone” nodes, and log likelihood ratio of correct and competing models, are tested with recordings of both native and non-native speakers, along with manual grading of pronunciation quality. The experimental results show that the GOP estimated by averaged frame-level posteriors of “senones” correlate with human scores the best. Comparing with GOPs estimated with non-DNN, i.e. GMMHMM, based models, the new approach can improve the correlations relatively by 22.0% or 15.6%, at word or sentence levels, respectively. In addition, the frame-level posteriors, which doesn’t need a decoding lattice and its corresponding forwardbackward computations, is suitable for supporting fast, on-line, multi-channel applications.",
"title": ""
},
{
"docid": "024e9600707203ffcf35ca96dff42a87",
"text": "The blockchain technology is gaining momentum because of its possible application to other systems than the cryptocurrency one. Indeed, blockchain, as a de-centralized system based on a distributed digital ledger, can be utilized to securely manage any kind of assets, constructing a system that is independent of any authorization entity. In this paper, we briefly present blockchain and our work in progress, the VMOA blockchain, to secure virtual machine orchestration operations for cloud computing and network functions virtualization systems. Using tutorial examples, we describe our design choices and draw implementation plans.",
"title": ""
},
{
"docid": "92a112d7b6f668ece433e62a7fe4054c",
"text": "A new technique for stabilizing nonholonomic systems to trajectories is presented. It is well known (see [2]) that such systems cannot be stabilized to a point using smooth static-state feedback. In this note, we suggest the use of control laws for stabilizing a system about a trajectory, instead of a point. Given a nonlinear system and a desired (nominal) feasible trajectory, the note gives an explicit control law which will locally exponentially stabilize the system to the desired trajectory. The theory is applied to several examples, including a car-like robot.",
"title": ""
},
{
"docid": "26508379e41da5e3b38dd944fc9e4783",
"text": "We describe the Photobook system, which is a set of interactive tools for browsing and searching images and image sequences. These tools differ from those used in standard image databases in that they make direct use of the image content rather than relying on annotations. Direct search on image content is made possible by use of semantics-preserving image compression, which reduces images to a small set of perceptually-significant coefficients. We describe three Photobook tools in particular: one that allows search based on grey-level appearance, one that uses 2-D shape, and a third that allows search based on textural properties.",
"title": ""
},
{
"docid": "b987f831f4174ad5d06882040769b1ac",
"text": "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. 1 Summary Application trends, device technologies and the architecture of systems drive progress in information technologies. However,",
"title": ""
},
{
"docid": "9827fa3952b7ba4e5e777793cc241148",
"text": "We address the problem of segmenting a sequence of images of natural scenes into disjoint regions that are characterized by constant spatio-temporal statistics. We model the spatio-temporal dynamics in each region by Gauss-Markov models, and infer the model parameters as well as the boundary of the regions in a variational optimization framework. Numerical results demonstrate that – in contrast to purely texture-based segmentation schemes – our method is effective in segmenting regions that differ in their dynamics even when spatial statistics are identical.",
"title": ""
},
{
"docid": "cfd0cadbdf58ee01095aea668f0da4fe",
"text": "A unique and miniaturized dual-band coplanar waveguide (CPW)-fed antenna is presented. The proposed antenna comprises a rectangular patch that is surrounded by upper and lower ground-plane sections that are interconnected by a high-impedance microstrip line. The proposed antenna structure generates two separate impedance bandwidths to cover frequency bands of GSM and Wi-Fi/WLAN. The antenna realized is relatively small in size $(17\\times 20\\ {\\hbox{mm}}^{2})$ and operates over frequency ranges 1.60–1.85 and 4.95–5.80 GHz, making it suitable for GSM and Wi-Fi/WLAN applications. In addition, the antenna is circularly polarized in the GSM band. Experimental results show the antenna exhibits monopole-like radiation characteristics and a good antenna gain over its operating bands. The measured and simulated results presented show good agreement.",
"title": ""
},
{
"docid": "f782af034ef46a15d89637a43ad2849c",
"text": "Introduction: Evidence-based treatment of abdominal hernias involves the use of prosthetic mesh. However, the most commonly used method of treatment of diastasis of the recti involves plication with non-absorbable sutures as part of an abdominoplasty procedure. This case report describes single-port laparoscopic repair of diastasis of recti and umbilical hernia with prosthetic mesh after plication with slowly absorbable sutures combined with abdominoplasty. Technique Description: Our patient is a 36-year-old woman with severe diastasis of the recti, umbilical hernia and an excessive amount of redundant skin after two previous pregnancies and caesarean sections. After raising the upper abdominal flap, a single-port was placed in the left upper quadrant and the ligamenturn teres was divided. The diastasis of the recti and umbilical hernia were plicated under direct vision with continuous and interrupted slowly absorbable sutures before an antiadhesive mesh was placed behind the repair with 6 cm overlap, transfixed in 4 quadrants and tacked in place with non-absorbable tacks in a double-crown technique. The left upper quadrant wound was closed with slowly absorbable sutures. The excess skin was removed and fibrin sealant was sprayed in the subcutaneous space to minimize the risk of serorna formation without using drains. Discussion: Combining single-port laparoscopic repair of diastasis of recti and umbilical hemia repair minimizes inadvertent suturing of abdominal contents during plication, the risks of port site hernias associated with conventional multipart repair and permanently reinforced the midline weakness while achieving “scarless” surgery.",
"title": ""
},
{
"docid": "6e67329e4f678ae9dc04395ae0a5b832",
"text": "This review covers recent developments in the social influence literature, focusing primarily on compliance and conformity research published between 1997 and 2002. The principles and processes underlying a target's susceptibility to outside influences are considered in light of three goals fundamental to rewarding human functioning. Specifically, targets are motivated to form accurate perceptions of reality and react accordingly, to develop and preserve meaningful social relationships, and to maintain a favorable self-concept. Consistent with the current movement in compliance and conformity research, this review emphasizes the ways in which these goals interact with external forces to engender social influence processes that are subtle, indirect, and outside of awareness.",
"title": ""
},
{
"docid": "2d59fe09633ee41c60e9e951986e56a6",
"text": "Face alignment and 3D face reconstruction are traditionally accomplished as separated tasks. By exploring the strong correlation between 2D landmarks and 3D shapes, in contrast, we propose a joint face alignment and 3D face reconstruction method to simultaneously solve these two problems for 2D face images of arbitrary poses and expressions. This method, based on a summation model of 3D face shapes and cascaded regression in 2D and 3D face shape spaces, iteratively and alternately applies two cascaded regressors, one for updating 2D landmarks and the other for 3D face shape. The 3D face shape and the landmarks are correlated via a 3D-to-2D mapping matrix. Unlike existing methods, the proposed method can fully automatically generate both pose-and-expression-normalized (PEN) and expressive 3D face shapes and localize both visible and invisible 2D landmarks. Based on the PEN 3D face shapes, we devise a method to enhance face recognition accuracy across poses and expressions. Both linear and nonlinear implementations of the proposed method are presented and evaluated in this paper. Extensive experiments show that the proposed method can achieve the state-of-the-art accuracy in both face alignment and 3D face reconstruction, and benefit face recognition owing to its reconstructed PEN 3D face shapes.",
"title": ""
},
{
"docid": "b12f1b1ff7618c1f54462c18c768dae8",
"text": "Retrieval is the key process for understanding learning and for promoting learning, yet retrieval is not often granted the central role it deserves. Learning is typically identified with the encoding or construction of knowledge, and retrieval is considered merely the assessment of learning that occurred in a prior experience. The retrieval-based learning perspective outlined here is grounded in the fact that all expressions of knowledge involve retrieval and depend on the retrieval cues available in a given context. Further, every time a person retrieves knowledge, that knowledge is changed, because retrieving knowledge improves one’s ability to retrieve it again in the future. Practicing retrieval does not merely produce rote, transient learning; it produces meaningful, long-term learning. Yet retrieval practice is a tool many students lack metacognitive awareness of and do not use as often as they should. Active retrieval is an effective but undervalued strategy for promoting meaningful learning.",
"title": ""
},
{
"docid": "37fcf6201c168e87d6ef218ecb71c211",
"text": "NASA-TLX is a multi-dimensional scale designed to obtain workload estimates from one or more operators while they are performing a task or immediately afterwards. The years of research that preceded subscale selection and the weighted averaging approach resulted in a tool that has proven to be reasonably easy to use and reliably sensitive to experimentally important manipulations over the past 20 years. Its use has spread far beyond its original application (aviation), focus (crew complement), and language (English). This survey of 550 studies in which NASA-TLX was used or reviewed was undertaken to provide a resource for a new generation of users. The goal was to summarize the environments in which it has been applied, the types of activities the raters performed, other variables that were measured that did (or did not) covary, methodological issues, and lessons learned",
"title": ""
},
{
"docid": "bb8ca605a714d71be903d46bf6e1fa40",
"text": "Several methods have been proposed for automatic and objective monitoring of food intake, but their performance suffers in the presence of speech and motion artifacts. This paper presents a novel sensor system and algorithms for detection and characterization of chewing bouts from a piezoelectric strain sensor placed on the temporalis muscle. The proposed data acquisition device was incorporated into the temple of eyeglasses. The system was tested by ten participants in two part experiments, one under controlled laboratory conditions and the other in unrestricted free-living. The proposed food intake recognition method first performed an energy-based segmentation to isolate candidate chewing segments (instead of using epochs of fixed duration commonly reported in research literature), with the subsequent classification of the segments by linear support vector machine models. On participant level (combining data from both laboratory and free-living experiments), with ten-fold leave-one-out cross-validation, chewing were recognized with average F-score of 96.28% and the resultant area under the curve was 0.97, which are higher than any of the previously reported results. A multivariate regression model was used to estimate chew counts from segments classified as chewing with an average mean absolute error of 3.83% on participant level. These results suggest that the proposed system is able to identify chewing segments in the presence of speech and motion artifacts, as well as automatically and accurately quantify chewing behavior, both under controlled laboratory conditions and unrestricted free-living.",
"title": ""
},
{
"docid": "901debd94cb5749a9a1f06b0fd0cb155",
"text": "• Business process reengineering-the redesign of an organization's business processes to make them more efficient. • Coordination technology-an aid to managing dependencies among the agents within a business process, and provides automated support for the most routinized component processes. * Process-driven software development environments-an automated system for integrating the work of all software related management and staff; it provides embedded support for an orderly and defined software development process. These three applications share a growing requirement to represent the processes through which work is accomplished. To the extent that automation is involved, process representation becomes a vital issue in redesigning work and allocating responsibilities between humans and computers. This requirement reflects the growing use of distributed , networked systems to link the interacting agents responsible for executing a business process. To establish process modeling as a unique area, researchers must identify conceptual boundaries that distinguish their work from model-ing in other areas of information science. Process modeling is distinguished from other types of model-ing in computer science because many of the phenomena modeled must be enacted by a human rather than a machine. At least some mod-eling, however, in the area of human-machine system integration or information systems design has this 'human-executable' attribute. Rather than focusing solely on the user's behavior at the interface or the flow and transformation of data within the system, process model-ing also focuses on interacting behaviors among agents, regardless of whether a computer is involved in the transactions. Much of the research on process modeling has been conducted on software development organizations , since the software engineering community is already accustomed to formal modeling. Software process modeling, in particular , explicitly focuses on phenomena that occur during software creation and evolution, a domain different from that usually mod-eled in human-machine integration or information systems design. Software development is a challenging focus for process modeling because of the creative problem-solving involved in requirements analysis and design, and the coordination of team interactions during the development of a complex intellectual artifact. In this article, software process modeling will be used as an example application for describing the current status of process modeling, issues for practical use, and the research questions that remain ahead. Most software organizations possess several yards of software life cycle description, enough to wrap endlessly around the walls of project rooms. Often these descriptions do not correspond to the processes actually performed during software …",
"title": ""
},
{
"docid": "333e2df79425177f0ce2686bd5edbfbe",
"text": "The current paper proposes a novel variational Bayes predictive coding RNN model, which can learn to generate fluctuated temporal patterns from exemplars. The model learns to maximize the lower bound of the weighted sum of the regularization and reconstruction error terms. We examined how this weighting can affect development of different types of information processing while learning fluctuated temporal patterns. Simulation results show that strong weighting of the reconstruction term causes the development of deterministic chaos for imitating the randomness observed in target sequences, while strong weighting of the regularization term causes the development of stochastic dynamics imitating probabilistic processes observed in targets. Moreover, results indicate that the most generalized learning emerges between these two extremes. The paper concludes with implications in terms of the underlying neuronal mechanisms for autism spectrum disorder and for free action.",
"title": ""
}
] |
scidocsrr
|
ba2d6e33064b61517dfb0593665c3c47
|
Graph Frequency Analysis of Brain Signals
|
[
{
"docid": "97490d6458ba9870ce22b3418c558c58",
"text": "The brain is expensive, incurring high material and metabolic costs for its size — relative to the size of the body — and many aspects of brain network organization can be mostly explained by a parsimonious drive to minimize these costs. However, brain networks or connectomes also have high topological efficiency, robustness, modularity and a 'rich club' of connector hubs. Many of these and other advantageous topological properties will probably entail a wiring-cost premium. We propose that brain organization is shaped by an economic trade-off between minimizing costs and allowing the emergence of adaptively valuable topological patterns of anatomical or functional connectivity between multiple neuronal populations. This process of negotiating, and re-negotiating, trade-offs between wiring cost and topological value continues over long (decades) and short (millisecond) timescales as brain networks evolve, grow and adapt to changing cognitive demands. An economical analysis of neuropsychiatric disorders highlights the vulnerability of the more costly elements of brain networks to pathological attack or abnormal development.",
"title": ""
},
{
"docid": "e94afab2ce61d7426510a5bcc88f7ca8",
"text": "Community detection is an important task in network analysis, in which we aim to learn a network partition that groups together vertices with similar community-level connectivity patterns. By finding such groups of vertices with similar structural roles, we extract a compact representation of the network’s large-scale structure, which can facilitate its scientific interpretation and the prediction of unknown or future interactions. Popular approaches, including the stochastic block model, assume edges are unweighted, which limits their utility by discarding potentially useful information. We introduce the weighted stochastic block model (WSBM), which generalizes the stochastic block model to networks with edge weights drawn from any exponential family distribution. This model learns from both the presence and weight of edges, allowing it to discover structure that would otherwise be hidden when weights are discarded or thresholded. We describe a Bayesian variational algorithm for efficiently approximating this model’s posterior distribution over latent block structures. We then evaluate the WSBM’s performance on both edge-existence and edge-weight prediction tasks for a set of real-world weighted networks. In all cases, the WSBM performs as well or better than the best alternatives on these tasks. community detection, weighted relational data, block models, exponential family, variational Bayes.",
"title": ""
}
] |
[
{
"docid": "846ae985f61a0dcdb1ff3a2226c1b41a",
"text": "OBJECTIVE\nThis article provides an overview of tactile displays. Its goal is to assist human factors practitioners in deciding when and how to employ the sense of touch for the purpose of information representation. The article also identifies important research needs in this area.\n\n\nBACKGROUND\nFirst attempts to utilize the sense of touch as a medium for communication date back to the late 1950s. For the next 35 years progress in this area was relatively slow, but recent years have seen a surge in the interest and development of tactile displays and the integration of tactile signals in multimodal interfaces. A thorough understanding of the properties of this sensory channel and its interaction with other modalities is needed to ensure the effective and robust use of tactile displays.\n\n\nMETHODS\nFirst, an overview of vibrotactile perception is provided. Next, the design of tactile displays is discussed with respect to available technologies. The potential benefit of including tactile cues in multimodal interfaces is discussed. Finally, research needs in the area of tactile information presentation are highlighted.\n\n\nRESULTS\nThis review provides human factors researchers and interface designers with the requisite knowledge for creating effective tactile interfaces. It describes both potential benefits and limitations of this approach to information presentation.\n\n\nCONCLUSION\nThe sense of touch represents a promising means of supporting communication and coordination in human-human and human-machine systems.\n\n\nAPPLICATION\nTactile interfaces can support numerous functions, including spatial orientation and guidance, attention management, and sensory substitution, in a wide range of domains.",
"title": ""
},
{
"docid": "942be0aa4dab5904139919351d6d63d4",
"text": "Since Hinton and Salakhutdinov published their landmark science paper in 2006 ending the previous neural-network winter, research in neural networks has increased dramatically. Researchers have applied neural networks seemingly successfully to various topics in the field of computer science. However, there is a risk that we overlook other methods. Therefore, we take a recent end-to-end neural-network-based work (Dhingra et al., 2018) as a starting point and contrast this work with more classical techniques. This prior work focuses on the LAMBADA word prediction task, where broad context is used to predict the last word of a sentence. It is often assumed that neural networks are good at such tasks where feature extraction is important. We show that with simpler syntactic and semantic features (e.g. Across Sentence Boundary (ASB) N-grams) a state-ofthe-art neural network can be outperformed. Our discriminative language-model-based approach improves the word prediction accuracy from 55.6% to 58.9% on the LAMBADA task. As a next step, we plan to extend this work to other language modeling tasks.",
"title": ""
},
{
"docid": "d647fc2b5635a3dfcebf7843fef3434c",
"text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field on the crossroads of ICT and psychology is still embryonic and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better researchmethodologies, developing basic social touch building blocks, and solving specific ICT challenges.",
"title": ""
},
{
"docid": "55dbe73527f91af939e068a76d0200b7",
"text": "With an ageing population in an industrialised world, the global burden of stroke is staggering millions of strokes a year. Hemiparesis is one of the most.Lancet. Rehabilitation of hemiparesis after stroke with a mirror. Altschuler EL, Wisdom SB, Stone L, Foster C, Galasko D.Rehabilitation of the severely affected paretic arm after stroke represents a major challenge, especially in the presence of sensory impairment. Objective.in patients after stroke. This article reviews the evidence for motor imagery or.",
"title": ""
},
{
"docid": "652366f6feab8f3792c0fcb74318472d",
"text": "OBJECTIVE\nTo evaluate the prefrontal space ratio (PFSR) in second- and third-trimester euploid fetuses and fetuses with trisomy 21.\n\n\nMETHODS\nThis was a retrospective study utilizing stored mid-sagittal two-dimensional images of second- and third-trimester fetal faces that were recorded during prenatal ultrasound examinations at the Department of Prenatal Medicine at the University of Tuebingen, Germany and at a private center for prenatal medicine in Nuremberg, Germany. For the normal range, 279 euploid pregnancies between 15 and 40 weeks' gestation were included. The results were compared with 91 cases with trisomy 21 between 15 and 40 weeks. For the ratio measurement, a line was drawn between the leading edge of the mandible and the maxilla (MM line) and extended in front of the forehead. The ratio of the distance between the leading edge of the skull and the leading edge of the skin (d1) to the distance between the skin and the point where the MM line was intercepted (d2) was calculated. The PFSR was determined by dividing d2 by d1.\n\n\nRESULTS\nIn the euploid and trisomy 21 groups, the median gestational age at the time of ultrasound examination was 21.1 (range, 15.0-40.0) and 21.4 (range, 15.0-40.3) weeks, respectively. Multiple regression analysis showed that PFSR was independent of maternal and gestational age. In the euploid group, the mean PFSR was 0.97 ± 0.29. In fetuses with trisomy 21, the mean PFSR was 0.2 ± 0.38 (P < 0.0001). The PFSR was below the 5(th) centile in 14 (5.0%) euploid fetuses and in 72 (79.1%) fetuses with trisomy 21.\n\n\nCONCLUSION\nThe PFSR is a simple and effective marker in second- and third-trimester screening for trisomy 21.",
"title": ""
},
{
"docid": "3dd238bc2b51b3aaf9b8b6900fc82d12",
"text": "Nowadays many applications are generating streaming data for an example real-time surveillance, internet traffic, sensor data, health monitoring systems, communication networks, online transactions in the financial market and so on. Data Streams are temporally ordered, fast changing, massive, and potentially infinite sequence of data. Data Stream mining is a very challenging problem. This is due to the fact that data streams are of tremendous volume and flows at very high speed which makes it impossible to store and scan streaming data multiple time. Concept evolution in streaming data further magnifies the challenge of working with streaming data. Clustering is a data stream mining task which is very useful to gain insight of data and data characteristics. Clustering is also used as a pre-processing step in over all mining process for an example clustering is used for outlier detection and for building classification model. In this paper we will focus on the challenges and necessary features of data stream clustering techniques, review and compare the literature for data stream clustering by example and variable, describe some real world applications of data stream clustering, and tools for data stream clustering.",
"title": ""
},
{
"docid": "ce1d25b3d2e32f903ce29470514abcce",
"text": "We present a method to generate a robot control strategy that maximizes the probability to accomplish a task. The task is given as a Linear Temporal Logic (LTL) formula over a set of properties that can be satisfied at the regions of a partitioned environment. We assume that the probabilities with which the properties are satisfied at the regions are known, and the robot can determine the truth value of a proposition only at the current region. Motivated by several results on partitioned-based abstractions, we assume that the motion is performed on a graph. To account for noisy sensors and actuators, we assume that a control action enables several transitions with known probabilities. We show that this problem can be reduced to the problem of generating a control policy for a Markov Decision Process (MDP) such that the probability of satisfying an LTL formula over its states is maximized. We provide a complete solution for the latter problem that builds on existing results from probabilistic model checking. We include an illustrative case study.",
"title": ""
},
{
"docid": "284c52c29b5a5c2d3fbd0a7141353e35",
"text": "This paper presents results of patient experiments using a new gait-phase detection sensor (GPDS) together with a programmable functional electrical stimulation (FES) system for subjects with a dropped-foot walking dysfunction. The GPDS (sensors and processing unit) is entirely embedded in a shoe insole and detects in real time four phases (events) during the gait cycle: stance, heel off, swing, and heel strike. The instrumented GPDS insole consists of a miniature gyroscope that measures the angular velocity of the foot and three force sensitive resistors that measure the force load on the shoe insole at the heel and the metatarsal bones. The extracted gait-phase signal is transmitted from the embedded microcontroller to the electrical stimulator and used in a finite state control scheme to time the electrical stimulation sequences. The electrical stimulations induce muscle contractions in the paralyzed muscles leading to a more physiological motion of the affected leg. The experimental results of the quantitative motion analysis during walking of the affected and nonaffected sides showed that the use of the combined insole and FES system led to a significant improvement in the gait-kinematics of the affected leg. This combined sensor and stimulation system has the potential to serve as a walking aid for rehabilitation training or permanent use in a wide range of gait disabilities after brain stroke, spinal-cord injury, or neurological diseases.",
"title": ""
},
{
"docid": "5275184686a8453a1922cec7a236b66d",
"text": "Children’s sense of relatedness is vital to their academic motivation from 3rd to 6th grade. Children’s (n 641) reports of relatedness predicted changes in classroom engagement over the school year and contributed over and above the effects of perceived control. Regression and cumulative risk analyses revealed that relatedness to parents, teachers, and peers each uniquely contributed to students’ engagement, especially emotional engagement. Girls reported higher relatedness than boys, but relatedness to teachers was a more salient predictor of engagement for boys. Feelings of relatedness to teachers dropped from 5th to 6th grade, but the effects of relatedness on engagement were stronger for 6th graders. Discussion examines theoretical, empirical, and practical implications of relatedness as a key predictor of children’s academic motivation and performance.",
"title": ""
},
{
"docid": "0b8c51f823cb55cbccfae098e98f28b3",
"text": "In this study, we investigate whether the “out of body” vibrotactile illusion known as funneling could be applied to enrich and thereby improve the interaction performance on a tablet-sized media device. First, a series of pilot tests was taken to determine the appropriate operational conditions and parameters (such as the tablet size, holding position, minimal required vibration amplitude, and the effect of matching visual feedback) for a two-dimensional (2D) illusory tactile rendering method. Two main experiments were then conducted to validate the basic applicability and effectiveness of the rendering method, and to further demonstrate how the illusory tactile feedback could be deployed in an interactive application and actually improve user performance. Our results showed that for a tablet-sized device (e.g., iPad mini and iPad), illusory perception was possible (localization performance of up to 85%) using a rectilinear grid with a resolution of 5 $$\\times $$ × 7 (grid size: 2.5 cm) with matching visual feedback. Furthermore, the illusory feedback was found to be a significant factor in improving the user performance in a 2D object search/attention task.",
"title": ""
},
{
"docid": "77df82cf7a9ddca2038433fa96a43cef",
"text": "In this study, new algorithms are proposed for exposing forgeries in soccer images. We propose a new and automatic algorithm to extract the soccer field, field side and the lines of field in order to generate an image of real lines for forensic analysis. By comparing the image of real lines and the lines in the input image, the forensic analyzer can easily detect line displacements of the soccer field. To expose forgery in the location of a player, we measure the height of the player using the geometric information in the soccer image and use the inconsistency of the measured height with the true height of the player as a clue for detecting the displacement of the player. In this study, two novel approaches are proposed to measure the height of a player. In the first approach, the intersections of white lines in the soccer field are employed for automatic calibration of the camera. We derive a closed-form solution to calculate different camera parameters. Then the calculated parameters of the camera are used to measure the height of a player using an interactive approach. In the second approach, the geometry of vanishing lines and the dimensions of soccer gate are used to measure a player height. Various experiments using real and synthetic soccer images show the efficiency of the proposed algorithms.",
"title": ""
},
{
"docid": "8b84dc47c6a9d39ef1d094aa173a954c",
"text": "Named entity recognition (NER) is a subtask of information extraction that seeks to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc. We use the JavaNLP repository(http://nlp.stanford.edu/javanlp/ ) for its implementation of a Conditional Random Field(CRF) and a Conditional Markov Model(CMM), also called a Maximum Entropy Markov Model. We have obtained results on majority voting with different labeling schemes, with backward and forward parsing of the CMM, and also some results when we trained a decision tree to take a decision based on the outputs of the different labeling schemes. We have also tried to solve the problem of label inconsistency issue by attempting the naive approach of enforcing hard label-consistency by choosing the majority entity for a sequence of tokens, in the specific test document, as well as the whole test corpus, and managed to get reasonable gains. We also attempted soft label consistency in the following way. We use a portion of the training data to train a CRF to make predictions on the rest of the train data and on the test data. We then train a second CRF with the majority label predictions as additional input features.",
"title": ""
},
{
"docid": "2c1f93d4e517fe56a5ebf668e8a0bc12",
"text": "The Internet was designed with the end-to-end principle where the network layer provided merely the best-effort forwarding service. This design makes it challenging to add new services into the Internet infrastructure. However, as the Internet connectivity becomes a commodity, users and applications increasingly demand new in-network services. This paper proposes PacketCloud, a cloudlet-based open platform to host in-network services. Different from standalone, specialized middleboxes, cloudlets can efficiently share a set of commodity servers among different services, and serve the network traffic in an elastic way. PacketCloud can help both Internet Service Providers (ISPs) and emerging application/content providers deploy their services at strategic network locations. We have implemented a proof-of-concept prototype of PacketCloud. PacketCloud introduces a small additional delay, and can scale well to handle high-throughput data traffic. We have evaluated PacketCloud in both a fully functional emulated environment, and the real Internet.",
"title": ""
},
{
"docid": "4f2ebb2640a36651fd8c01f3eeb0e13e",
"text": "This paper addresses pixel-level segmentation of a human body from a single image. The problem is formulated as a multi-region segmentation where the human body is constrained to be a collection of geometrically linked regions and the background is split into a small number of distinct zones. We solve this problem in a Bayesian framework for jointly estimating articulated body pose and the pixel-level segmentation of each body part. Using an image likelihood function that simultaneously generates and evaluates the image segmentation corresponding to a given pose, we robustly explore the posterior body shape distribution using a data-driven, coarse-to-fine Metropolis Hastings sampling scheme that includes a strongly data-driven proposal term.",
"title": ""
},
{
"docid": "6bc611936d412dde15999b2eb179c9e2",
"text": "Smith-Lemli-Opitz syndrome, a severe developmental disorder associated with multiple congenital anomalies, is caused by a defect of cholesterol biosynthesis. Low cholesterol and high concentrations of its direct precursor, 7-dehydrocholesterol, in plasma and tissues are the diagnostic biochemical hallmarks of the syndrome. The plasma sterol concentrations correlate with severity and disease outcome. Mutations in the DHCR7 gene lead to deficient activity of 7-dehydrocholesterol reductase (DHCR7), the final enzyme of the cholesterol biosynthetic pathway. The human DHCR7 gene is localised on chromosome 11q13 and its structure has been characterized. Ninetyone different mutations in the DHCR7 gene have been published to date. This paper is a review of the clinical, biochemical and molecular genetic aspects.",
"title": ""
},
{
"docid": "9825e8a24aba301c4c7be3b8b4c4dde5",
"text": "Being a cross-camera retrieval task, person re-identification suffers from image style variations caused by different cameras. The art implicitly addresses this problem by learning a camera-invariant descriptor subspace. In this paper, we explicitly consider this challenge by introducing camera style (CamStyle) adaptation. CamStyle can serve as a data augmentation approach that smooths the camera style disparities. Specifically, with CycleGAN, labeled training images can be style-transferred to each camera, and, along with the original training samples, form the augmented training set. This method, while increasing data diversity against over-fitting, also incurs a considerable level of noise. In the effort to alleviate the impact of noise, the label smooth regularization (LSR) is adopted. The vanilla version of our method (without LSR) performs reasonably well on few-camera systems in which over-fitting often occurs. With LSR, we demonstrate consistent improvement in all systems regardless of the extent of over-fitting. We also report competitive accuracy compared with the state of the art. Code is available at: https://github.com/zhunzhong07/CamStyle",
"title": ""
},
{
"docid": "b876e62db8a45ab17d3a9d217e223eb7",
"text": "A study was conducted to evaluate user performance andsatisfaction in completion of a set of text creation tasks usingthree commercially available continuous speech recognition systems.The study also compared user performance on similar tasks usingkeyboard input. One part of the study (Initial Use) involved 24users who enrolled, received training and carried out practicetasks, and then completed a set of transcription and compositiontasks in a single session. In a parallel effort (Extended Use),four researchers used speech recognition to carry out real worktasks over 10 sessions with each of the three speech recognitionsoftware products. This paper presents results from the Initial Usephase of the study along with some preliminary results from theExtended Use phase. We present details of the kinds of usabilityand system design problems likely in current systems and severalcommon patterns of error correction that we found.",
"title": ""
},
{
"docid": "88fa70ef8c6dfdef7d1c154438ff53c2",
"text": "There has been substantial progress in the field of text based sentiment analysis but little effort has been made to incorporate other modalities. Previous work in sentiment analysis has shown that using multimodal data yields to more accurate models of sentiment. Efforts have been made towards expressing sentiment as a spectrum of intensity rather than just positive or negative. Such models are useful not only for detection of positivity or negativity, but also giving out a score of how positive or negative a statement is. Based on the state of the art studies in sentiment analysis, prediction in terms of sentiment score is still far from accurate, even in large datasets [27]. Another challenge in sentiment analysis is dealing with small segments or micro opinions as they carry less context than large segments thus making analysis of the sentiment harder. This paper presents a Ph.D. thesis shaped towards comprehensive studies in multimodal micro-opinion sentiment intensity analysis.",
"title": ""
},
{
"docid": "9924e44d94d00a7a3dbd313409f5006a",
"text": "Multiple-instance problems arise from the situations where training class labels are attached to sets of samples (named bags), instead of individual samples within each bag (called instances). Most previous multiple-instance learning (MIL) algorithms are developed based on the assumption that a bag is positive if and only if at least one of its instances is positive. Although the assumption works well in a drug activity prediction problem, it is rather restrictive for other applications, especially those in the computer vision area. We propose a learning method, MILES (multiple-instance learning via embedded instance selection), which converts the multiple-instance learning problem to a standard supervised learning problem that does not impose the assumption relating instance labels to bag labels. MILES maps each bag into a feature space defined by the instances in the training bags via an instance similarity measure. This feature mapping often provides a large number of redundant or irrelevant features. Hence, 1-norm SVM is applied to select important features as well as construct classifiers simultaneously. We have performed extensive experiments. In comparison with other methods, MILES demonstrates competitive classification accuracy, high computation efficiency, and robustness to labeling uncertainty",
"title": ""
},
{
"docid": "08d8e372c5ae4eef9848552ee87fbd64",
"text": "What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and inter-connexions. It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it. In the cat's visual cortex, the receptive field arrangements of single cells suggest that there is indeed a degree of complexity far exceeding anything yet seen at lower levels in the visual system. In a previous paper we described receptive fields of single cortical cells, observing responses to spots of light shone on one or both retinas (Hubel & Wiesel, 1959). In the present work this method is used to examine receptive fields of a more complex type (Part I) and to make additional observations on binocular interaction (Part II). This approach is necessary in order to understand the behaviour of individual cells, but it fails to deal with the problem of the relationship of one cell to its neighbours. In the past, the technique of recording evoked slow waves has been used with great success in studies of functional anatomy. It was employed by Talbot & Marshall (1941) and by Thompson, Woolsey & Talbot (1950) for mapping out the visual cortex in the rabbit, cat, and monkey. Daniel & Whitteiidge (1959) have recently extended this work in the primate. Most of our present knowledge of retinotopic projections, binocular overlap, and the second visual area is based on these investigations. Yet the method of evoked potentials is valuable mainly for detecting behaviour common to large populations of neighbouring cells; it cannot differentiate functionally between areas of cortex smaller than about 1 mm2. To overcome this difficulty a method has in recent years been developed for studying cells separately or in small groups during long micro-electrode penetrations through nervous tissue. Responses are correlated with cell location by reconstructing the electrode tracks from histological material. These techniques have been applied to CAT VISUAL CORTEX 107 the somatic sensory cortex of the cat and monkey in a remarkable series of studies by Mountcastle (1957) and Powell & Mountcastle (1959). Their results show that the approach is a powerful one, capable of revealing systems of organization not hinted at by the known morphology. In Part III of the present paper we use this method in studying the functional architecture of the visual cortex. It helped us attempt to explain on anatomical …",
"title": ""
}
] |
scidocsrr
|
faf711062699daf00fac5ffac48e9e17
|
Exploring the role of customer relationship management (CRM) systems in customer knowledge creation
|
[
{
"docid": "de3aee8ca694d59eb0ef340b3b1c8161",
"text": "In recent years, organisations have begun to realise the importance of knowing their customers better. Customer relationship management (CRM) is an approach to managing customer related knowledge of increasing strategic significance. The successful adoption of IT-enabled CRM redefines the traditional models of interaction between businesses and their customers, both nationally and globally. It is regarded as a source for competitive advantage because it enables organisations to explore and use knowledge of their customers and to foster profitable and long-lasting one-to-one relationships. This paper discusses the results of an exploratory survey conducted in the UK financial services sector; it discusses CRM practice and expectations, the motives for implementing it, and evaluates post-implementation experiences. It also investigates the CRM tools functionality in the strategic, process, communication, and business-to-customer (B2C) organisational context and reports the extent of their use. The results show that despite the anticipated potential, the benefits from such tools are rather small. # 2004 Published by Elsevier B.V.",
"title": ""
}
] |
[
{
"docid": "ca20f416a3809a0a06d76d08697bcc4b",
"text": "BACKGROUND\nManual labor in the Agriculture, Forestry, and Fishing (AgFF) Sector is provided primarily by immigrant workers. Limited information is available that documents the demographic characteristics of these manual workers, the occupational illnesses, injuries and fatalities they experience; or the risk factors to which they are exposed.\n\n\nMETHODS\nA working conference of experts on occupational health in the AgFF Sector was held to address information limitations. This paper provides an overview of the conference. Other reports address organization of work, health outcomes, healthcare access, and safety policy.\n\n\nCONTENTS\nThis report addresses how best to define the population and the AgFF Sector, occupational exposures for the sector, data limitations, characteristics of immigrant workers, reasons for concern for immigrant workers in the AgFF Sector, regulations, a conceptual model for occupational health, and directions for research and intervention.",
"title": ""
},
{
"docid": "6d394ccc32b958d5ffbd34856b1bace4",
"text": "Interferometric synthetic aperture radar (InSAR) correlation, a measure of the similarity of two radar echoes, provides a quantitative measure of surface and subsurface scattering properties and hence surface composition and structure. Correlation is observed by comparing the radar return across several nearby radar image pixels, but estimates of correlation are biased by finite data sample size and any underlying interferometer fringe pattern. We present a method for correcting bias in InSAR correlation measurements resulting in significantly more accurate estimates, so that inverse models of surface properties are more useful. We demonstrate the value of the approach using data collected over Antarctica by the Radarsat spacecraft.",
"title": ""
},
{
"docid": "65209c3ce517aa7cdcdb3a7106ffe9f2",
"text": "This paper presents first results of the Networking and Cryptography library (NaCl) on the 8-bit AVR family of microcontrollers. We show that NaCl, which has so far been optimized mainly for different desktop and server platforms, is feasible on resource-constrained devices while being very fast and memory efficient. Our implementation shows that encryption using Salsa20 requires 268 cycles/byte, authentication using Poly1305 needs 195 cycles/byte, a Curve25519 scalar multiplication needs 22 791 579 cycles, signing of data using Ed25519 needs 23 216 241 cycles, and verification can be done within 32 634 713 cycles. All implemented primitives provide at least 128-bit security, run in constant time, do not use secret-data-dependent branch conditions, and are open to the public domain (no usage restrictions).",
"title": ""
},
{
"docid": "be8cfa012ffba4ee8017c3e299a88fb0",
"text": "The present study examined (1) the impact of a brief substance use intervention on delay discounting and indices of substance reward value (RV), and (2) whether baseline values and posttreatment change in these behavioral economic variables predict substance use outcomes. Participants were 97 heavy drinking college students (58.8% female, 41.2% male) who completed a brief motivational intervention (BMI) and then were randomized to one of two conditions: a supplemental behavioral economic intervention that attempted to increase engagement in substance-free activities associated with delayed rewards (SFAS) or an Education control (EDU). Demand intensity, and Omax, decreased and elasticity significantly increased after treatment, but there was no effect for condition. Both baseline values and change in RV, but not discounting, predicted substance use outcomes at 6-month follow-up. Students with high RV who used marijuana were more likely to reduce their use after the SFAS intervention. These results suggest that brief interventions may reduce substance reward value, and that changes in reward value are associated with subsequent drinking and drug use reductions. High RV marijuana users may benefit from intervention elements that enhance future time orientation and substance-free activity participation.",
"title": ""
},
{
"docid": "2dd9bb2536fdc5e040544d09fe3dd4fa",
"text": "Low 1/f noise, low-dropout (LDO) regulators are becoming critical for the supply regulation of deep-submicron analog baseband and RF system-on-chip designs. A low-noise, high accuracy LDO regulator (LN-LDO) utilizing a chopper stabilized error amplifier is presented. In order to achieve fast response during load transients, a current-mode feedback amplifier (CFA) is designed as a second stage driving the regulation FET. In order to reduce clock feed-through and 1/f noise accumulation at the chopping frequency, a first-order digital SigmaDelta noise-shaper is used for chopping clock spectral spreading. With up to 1 MHz noise-shaped modulation clock, the LN-LDO achieves a noise spectral density of 32 nV/radic(Hz) and a PSR of 38 dB at 100 kHz. The proposed LDO is shown to reduce the phase noise of an integrated 32 MHz temperature compensated crystal oscillator (TCXO) at 10 kHz offset by 15 dB. Due to reduced 1/f noise requirements, the error amplifier silicon area is reduced by 75%, and the overall regulator area is reduced by 50% with respect to an equivalent noise static regulator. The current-mode feedback second stage buffer reduces regulator settling time by 60% in comparison to an equivalent power consumption voltage mode buffer, achieving 0.6 mus settling time for a 25-mA load step. The LN-LDO is designed and fabricated on a 0.25 mum CMOS process with five layers of metal, occupying 0.88 mm2.",
"title": ""
},
{
"docid": "c4a74726ac56b0127e5920098e6f0258",
"text": "BACKGROUND\nAttention Deficit Hyperactivity disorder (ADHD) is one of the most common and challenging childhood neurobehavioral disorders. ADHD is known to negatively impact children, their families, and their community. About one-third to one-half of patients with ADHD will have persistent symptoms into adulthood. The prevalence in the United States is estimated at 5-11%, representing 6.4 million children nationwide. The variability in the prevalence of ADHD worldwide and within the US may be due to the wide range of factors that affect accurate assessment of children and youth. Because of these obstacles to assessment, ADHD is under-diagnosed, misdiagnosed, and undertreated.\n\n\nOBJECTIVES\nWe examined factors associated with making and receiving the diagnosis of ADHD. We sought to review the consequences of a lack of diagnosis and treatment for ADHD on children's and adolescent's lives and how their families and the community may be involved in these consequences.\n\n\nMETHODS\nWe reviewed scientific articles looking for factors that impact the identification and diagnosis of ADHD and articles that demonstrate naturalistic outcomes of diagnosis and treatment. The data bases PubMed and Google scholar were searched from the year 1995 to 2015 using the search terms \"ADHD, diagnosis, outcomes.\" We then reviewed abstracts and reference lists within those articles to rule out or rule in these or other articles.\n\n\nRESULTS\nMultiple factors have significant impact in the identification and diagnosis of ADHD including parents, healthcare providers, teachers, and aspects of the environment. Only a few studies detailed the impact of not diagnosing ADHD, with unclear consequences independent of treatment. A more significant number of studies have examined the impact of untreated ADHD. The experience around receiving a diagnosis described by individuals with ADHD provides some additional insights.\n\n\nCONCLUSION\nADHD diagnosis is influenced by perceptions of many different members of a child's community. A lack of clear understanding of ADHD and the importance of its diagnosis and treatment still exists among many members of the community including parents, teachers, and healthcare providers. More basic and clinical research will improve methods of diagnosis and information dissemination. Even before further advancements in science, strong partnerships between clinicians and patients with ADHD may be the best way to reduce the negative impacts of this disorder.",
"title": ""
},
{
"docid": "212848b1cd0c8e72ff64ac87e0a3805a",
"text": "INTRODUCTION\nSmartphones changed the method by which doctors communicate with each other, offer modern functionalities sensitive to the context of use, and can represent a valuable ally in the healthcare system. Studies have shown that WhatsApp™ application can facilitate communication within the healthcare team and provide the attending physician a constant oversight of activities performed by junior team members. The aim of the study was to use WhatsApp between two distant surgical teams involved in a program of elective surgery to verify if it facilitates communication, enhances learning, and improves patient care preserving their privacy.\n\n\nMETHODS\nWe conducted a focused group of surgeons over a 28-month period (from March 2013 to July 2015), and from September 2014 to July 2015, a group of selected specialists communicated healthcare matters through the newly founded \"WhatsApp Surgery Group.\" Each patient enrolled in the study signed a consent form to let the team communicate his/her clinical data using WhatsApp. Communication between team members, response times, and types of messages were evaluated.\n\n\nRESULTS\nForty six (n = 46) patients were enrolled in the study. A total of 1,053 images were used with an average of 78 images for each patient (range 41-143). 125 h of communication were recorded, generating 354 communication events. The expert surgeon had received the highest number of questions (P, 0.001), while the residents asked clinical questions (P, 0.001) and were the fastest responders to communications (P, 0.001).\n\n\nCONCLUSION\nOur study investigated how two distant clinical teams may exploit such a communication system and quantifies both the direction and type of communication between surgeons. WhatsApp is a low cost, secure, and fast technology and it offers the opportunity to facilitate clinical and nonclinical communications, enhance learning, and improve patient care preserving their privacy.",
"title": ""
},
{
"docid": "2586eaf8556ead1c085165569f9936b2",
"text": "SQL injection attack poses a serious security threats among the Internet community nowadays and it's continue to increase exploiting flaws found in the Web applications. In SQL injection attack, the attackers can take advantage of poorly coded web application software to introduce malicious code into the system and/or could retrieve important information. Web applications are under siege from cyber criminals seeking to steal confidential information and disable or damage the services offered by these application. Therefore, additional steps must be taken to ensure data security and integrity of the applications. In this paper we propose an innovative solution to filter the SQL injection attack using SNORT IDS. The proposed detection technique uses SNORT tool by augmenting a number of additional SNORT rules. We evaluate the proposed solution by comparing our method with several existing techniques. Experimental results demonstrate that the proposed method outperforms other similar techniques using the same data set.",
"title": ""
},
{
"docid": "ce53aa803d587301a47166c483ecec34",
"text": "Boosting takes on various forms with different programs using different loss functions, different base models, and different optimization schemes. The gbm package takes the approach described in [3] and [4]. Some of the terminology differs, mostly due to an effort to cast boosting terms into more standard statistical terminology (e.g. deviance). In addition, the gbm package implements boosting for models commonly used in statistics but not commonly associated with boosting. The Cox proportional hazard model, for example, is an incredibly useful model and the boosting framework applies quite readily with only slight modification [7]. Also some algorithms implemented in the gbm package differ from the standard implementation. The AdaBoost algorithm [2] has a particular loss function and a particular optimization algorithm associated with it. The gbm implementation of AdaBoost adopts AdaBoost’s exponential loss function (its bound on misclassification rate) but uses Friedman’s gradient descent algorithm rather than the original one proposed. So the main purposes of this document is to spell out in detail what the gbm package implements.",
"title": ""
},
{
"docid": "1c80fdc30b2b37443367dae187fbb376",
"text": "The web is a catalyst for drawing people together around shared goals, but many groups never reach critical mass. It can thus be risky to commit time or effort to a goal: participants show up only to discover that nobody else did, and organizers devote significant effort to causes that never get off the ground. Crowdfunding has lessened some of this risk by only calling in donations when an effort reaches a collective monetary goal. However, it leaves unsolved the harder problem of mobilizing effort, time and participation. We generalize the concept into activation thresholds, commitments that are conditioned on others' participation. With activation thresholds, supporters only need to show up for an event if enough other people commit as well. Catalyst is a platform that introduces activation thresholds for on-demand events. For more complex coordination needs, Catalyst also provides thresholds based on time or role (e.g., a bake sale requiring commitments for bakers, decorators, and sellers). In a multi-month field deployment, Catalyst helped users organize events including food bank volunteering, on-demand study groups, and mass participation events like a human chess game. Our results suggest that activation thresholds can indeed catalyze a large class of new collective efforts.",
"title": ""
},
{
"docid": "3dcfd937b9c1ae8ccc04c6a8a99c71f5",
"text": "Automatically generated fake restaurant reviews are a threat to online review systems. Recent research has shown that users have difficulties in detecting machine-generated fake reviews hiding among real restaurant reviews. The method used in this work (char-LSTM ) has one drawback: it has difficulties staying in context, i.e. when it generates a review for specific target entity, the resulting review may contain phrases that are unrelated to the target, thus increasing its detectability. In this work, we present and evaluate a more sophisticated technique based on neural machine translation (NMT) with which we can generate reviews that stay on-topic. We test multiple variants of our technique using native English speakers on Amazon Mechanical Turk. We demonstrate that reviews generated by the best variant have almost optimal undetectability (class-averaged F-score 47%). We conduct a user study with experienced users and show that our method evades detection more frequently compared to the state-of-the-art (average evasion 3.2/4 vs 1.5/4) with statistical significance, at level α = 1% (Section 4.3). We develop very effective detection tools and reach average F-score of 97% in classifying these. Although fake reviews are very effective in fooling people, effective automatic detection is still feasible.",
"title": ""
},
{
"docid": "5596f6d7ebe828f4d6f5ab4d94131b1d",
"text": "A successful quality model is indispensable in a rich variety of multimedia applications, e.g., image classification and video summarization. Conventional approaches have developed many features to assess media quality at both low-level and high-level. However, they cannot reflect the process of human visual cortex in media perception. It is generally accepted that an ideal quality model should be biologically plausible, i.e., capable of mimicking human gaze shifting as well as the complicated visual cognition. In this paper, we propose a biologically inspired quality model, focusing on interpreting how humans perceive visually and semantically important regions in an image (or a video clip). Particularly, we first extract local descriptors (graphlets in this work) from an image/frame. They are projected onto the perceptual space, which is built upon a set of low-level and high-level visual features. Then, an active learning algorithm is utilized to select graphlets that are both visually and semantically salient. The algorithm is based on the observation that each graphlet can be linearly reconstructed by its surrounding ones, and spatially nearer ones make a greater contribution. In this way, both the local and global geometric properties of an image/frame can be encoded in the selection process. These selected graphlets are linked into a so-called biological viewing path (BVP) to simulate human visual perception. Finally, the quality of an image or a video clip is predicted by a probabilistic model. Experiments shown that 1) the predicted BVPs are over 90% consistent with real human gaze shifting paths on average; and 2) our quality model outperforms many of its competitors remarkably.",
"title": ""
},
{
"docid": "a1774a08ffefd28785fbf3a8f4fc8830",
"text": "Bounds are given for the empirical and expected Rademacher complexity of classes of linear transformations from a Hilbert space H to a
nite dimensional space. The results imply generalization guarantees for graph regularization and multi-task subspace learning. 1 Introduction Rademacher averages have been introduced to learning theory as an e¢ cient complexity measure for function classes, motivated by tight, sample or distribution dependent generalization bounds ([10], [2]). Both the de
nition of Rademacher complexity and the generalization bounds extend easily from realvalued function classes to function classes with values in R, as they are relevant to multi-task learning ([1], [12]). There has been an increasing interest in multi-task learning which has shown to be very e¤ective in experiments ([7], [1]), and there have been some general studies of its generalisation performance ([4], [5]). For a large collection of tasks there are usually more data available than for a single task and these data may be put to a coherent use by some constraint of relatedness. A practically interesting case is linear multi-task learning, extending linear large margin classi
ers to vector valued large-margin classi
ers. Di¤erent types of constraints have been proposed: Evgeniou et al ([8], [9]) propose graph regularization, where the vectors de
ning the classi
ers of related tasks have to be near each other. They also show that their scheme can be implemented in the framework of kernel machines. Ando and Zhang [1] on the other hand require the classi
ers to be members of a common low dimensional subspace. They also give generalization bounds using Rademacher complexity, but these bounds increase with the dimension of the input space. This paper gives dimension free bounds which apply to both approaches. 1.1 Multi-task generalization and Rademacher complexity Suppose we have m classi
cation tasks, represented by m independent random variables X ; Y l taking values in X f 1; 1g, where X l models the random",
"title": ""
},
{
"docid": "ecb93affc7c9b0e4bf86949d3f2006d4",
"text": "We present data-dependent learning bounds for the general scenario of non-stationary nonmixing stochastic processes. Our learning guarantees are expressed in terms of a datadependent measure of sequential complexity and a discrepancy measure that can be estimated from data under some mild assumptions. We also also provide novel analysis of stable time series forecasting algorithm using this new notion of discrepancy that we introduce. We use our learning bounds to devise new algorithms for non-stationary time series forecasting for which we report some preliminary experimental results. An extended abstract has appeared in (Kuznetsov and Mohri, 2015).",
"title": ""
},
{
"docid": "f1d69b033490ed8c4eec7b476e9b7c08",
"text": "Performance-based measures of emotional intelligence (EI) are more likely than measures based on self-report to assess EI as a construct distinct from personality. A multivariate investigation was conducted with the performance-based, Multi-Factor Emotional Intelligence Scale (MEIS; J. D. Mayer, D. Caruso, & P. Salovey, 1999). Participants (N = 704) also completed the Trait Self-Description Inventory (TSDI, a measure of the Big Five personality factors; Christal, 1994; R. D. Roberts et al.), and the Armed Services Vocational Aptitude Battery (ASVAB, a measure of intelligence). Results were equivocal. Although the MEIS showed convergent validity (correlating moderately with the ASVAB) and divergent validity (correlating minimally with the TSDI), different scoring protocols (i.e., expert and consensus) yielded contradictory findings. Analyses of factor structure and subscale reliability identified further measurement problems. Overall, it is questionable whether the MEIS operationalizes EI as a reliable and valid construct.",
"title": ""
},
{
"docid": "e48b39ce7d5b9cc55dcf7d80ca00d4cd",
"text": "To efficiently extract local and global features in face description and recognition, a pyramid-based multi-scale LBP approach is proposed. Firstly, the face image pyramid is constructed through multi-scale analysis. Then the LBP operator is applied to each level of the image pyramid to extract facial features under various scales. Finally, all the extracted features are concatenated into an enhanced feature vector which is used as the face descriptor. Experimental results on ORL and FERET face databases show that the proposed LBP representation is highly efficient with good performance in face recognition and is robust to illumination, facial expression and position variation.",
"title": ""
},
{
"docid": "3b5216dfbd7b12cf282311d645b10a38",
"text": "3D CAD systems are used in product design for simultaneous engineering and to improve productivity. CAD tools can substantially enhance design performance. Although 3D CAD is a widely used and highly effective tool in mechanical design, mastery of CAD skills is complex and time-consuming. The concepts of parametric–associative models and systems are powerful tools whose efficiency is proportional to the complexity of their implementation. The availability of a framework for actions that can be taken to improve CAD efficiency can therefore be highly beneficial. Today, a clear and structured approach does not exist in this way for CAD methodology deployment. The novelty of this work is therefore to propose a general strategy for utilizing the advantages of parametric CAD in the automotive industry in the form of a roadmap. The main stages of the roadmap are illustrated by means of industrial use cases. The first results of his research are discussed and suggestions for future work are given. © 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "901924cc7e0e6177ac6727a183abc808",
"text": "In this paper we tackle the problem of document image retrieval by combining a similarity measure between documents and the probability that a given document belongs to a certain class. The membership probability to a specific class is computed using Support Vector Machines in conjunction with similarity measure based kernel applied to structural document representations. In the presented experiments, we use different document representations, both visual and structural, and we apply them to a database of historical documents. We show how our method based on similarity kernels outperforms the usual distance-based retrieval.",
"title": ""
},
{
"docid": "cd450942c0acc63d0018e3662a1d69ba",
"text": "By fractionating conditioned medium (CM) from Drosophila imaginal disc cell cultures, we have identified a family of Imaginal Disc Growth Factors (IDGFs), which are the first polypeptide growth factors to be reported from invertebrates. The active fraction from CM, as well as recombinant IDGFs, cooperate with insulin to stimulate the proliferation, polarization and motility of imaginal disc cells. The IDGF family in Drosophila includes at least five members, three of which are encoded by three genes in a tight cluster. The proteins are structurally related to chitinases, but they show an amino acid substitution that is known to abrogate catalytic activity. It therefore seems likely that they have evolved from chitinases but acquired a new growth-promoting function. The IDGF genes are expressed most strongly in the embryonic yolk cells and in the fat body of the embryo and larva. The predicted molecular structure, expression patterns, and mitogenic activity of these proteins suggest that they are secreted and transported to target tissues via the hemolymph. However, the genes are also expressed in embryonic epithelia in association with invagination movements, so the proteins may have local as well as systemic functions. Similar proteins are found in mammals and may constitute a novel class of growth factors.",
"title": ""
},
{
"docid": "48a0e75b97fdaa734f033c6b7791e81f",
"text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.",
"title": ""
}
] |
scidocsrr
|
eeff77ed1001e391788287a6cca55ea0
|
On the Dynamics of Social Media Popularity: A YouTube Case Study
|
[
{
"docid": "64c6012d2e97a1059161c295ae3b9cdb",
"text": "One of the most popular user activities on the Web is watching videos. Services like YouTube, Vimeo, and Hulu host and stream millions of videos, providing content that is on par with TV. While some of this content is popular all over the globe, some videos might be only watched in a confined, local region.\n In this work we study the relationship between popularity and locality of online YouTube videos. We investigate whether YouTube videos exhibit geographic locality of interest, with views arising from a confined spatial area rather than from a global one. Our analysis is done on a corpus of more than 20 millions YouTube videos, uploaded over one year from different regions. We find that about 50% of the videos have more than 70% of their views in a single region. By relating locality to viralness we show that social sharing generally widens the geographic reach of a video. If, however, a video cannot carry its social impulse over to other means of discovery, it gets stuck in a more confined geographic region. Finally, we analyze how the geographic properties of a video's views evolve on a daily basis during its lifetime, providing new insights on how the geographic reach of a video changes as its popularity peaks and then fades away.\n Our results demonstrate how, despite the global nature of the Web, online video consumption appears constrained by geographic locality of interest: this has a potential impact on a wide range of systems and applications, spanning from delivery networks to recommendation and discovery engines, providing new directions for future research.",
"title": ""
},
{
"docid": "b45d1003afac487dd3d5477621a85f74",
"text": "Creating, placing, and presenting social media content is a difficult problem. In addition to the quality of the content itself, several factors such as the way the content is presented (the title), the community it is posted to, whether it has been seen before, and the time it is posted determine its success. There are also interesting interactions between these factors. For example, the language of the title should be targeted to the community where the content is submitted, yet it should also highlight the distinctive nature of the content. In this paper, we examine how these factors interact to determine the popularity of social media content. We do so by studying resubmissions, i.e., content that has been submitted multiple times, with multiple titles, to multiple different communities. Such data allows us to ‘tease apart’ the extent to which each factor influences the success of that content. The models we develop help us understand how to better target social media content: by using the right title, for the right community, at the right time.",
"title": ""
},
{
"docid": "3d45de7d6ef9e162552698839550a6ee",
"text": "The queries people issue to a search engine and the results clicked following a query change over time. For example, after the earthquake in Japan in March 2011, the query japan spiked in popularity and people issuing the query were more likely to click government-related results than they would prior to the earthquake. We explore the modeling and prediction of such temporal patterns in Web search behavior. We develop a temporal modeling framework adapted from physics and signal processing and harness it to predict temporal patterns in search behavior using smoothing, trends, periodicities, and surprises. Using current and past behavioral data, we develop a learning procedure that can be used to construct models of users' Web search activities. We also develop a novel methodology that learns to select the best prediction model from a family of predictive models for a given query or a class of queries. Experimental results indicate that the predictive models significantly outperform baseline models that weight historical evidence the same for all queries. We present two applications where new methods introduced for the temporal modeling of user behavior significantly improve upon the state of the art. Finally, we discuss opportunities for using models of temporal dynamics to enhance other areas of Web search and information retrieval.",
"title": ""
},
{
"docid": "0d56b30aef52bfdf2cb6426a834126e5",
"text": "The wide adoption of social media has increased the competition among ideas for our finite attention. We employ a parsimonious agent-based model to study whether such a competition may affect the popularity of different memes, the diversity of information we are exposed to, and the fading of our collective interests for specific topics. Agents share messages on a social network but can only pay attention to a portion of the information they receive. In the emerging dynamics of information diffusion, a few memes go viral while most do not. The predictions of our model are consistent with empirical data from Twitter, a popular microblogging platform. Surprisingly, we can explain the massive heterogeneity in the popularity and persistence of memes as deriving from a combination of the competition for our limited attention and the structure of the social network, without the need to assume different intrinsic values among ideas.",
"title": ""
}
] |
[
{
"docid": "0d6e5e20d6a909a6450671feeb4ac261",
"text": "Rita bakalu, a new species, is described from the Godavari river system in peninsular India. With this finding, the genus Rita is enlarged to include seven species, comprising six species found in South Asia, R. rita, R. macracanthus, R. gogra, R. chrysea, R. kuturnee, R. bakalu, and one species R. sacerdotum from Southeast Asia. R. bakalu is distinguished from its congeners by a combination of the following characters: eye diameter 28–39% HL and 20–22 caudal fin rays; teeth in upper jaw uniformly villiform in two patches, interrupted at the midline; palatal teeth well-developed villiform, in two distinct patches located at the edge of the palate. The mtDNA cytochrome C oxidase I sequence analysis confirmed that the R. bakalu is distinct from the other congeners of Rita. Superficially, R. bakalu resembles R. kuturnee, reported from the Godavari and Krishna river systems; however, the two species are discriminated due to differences in the structure of their teeth patches on upper jaw and palate, anal fin originating before the origin of adipose fin, comparatively larger eye diameter, longer mandibular barbels, and vertebral count. The results conclude that the river Godavari harbors a different species of Rita, R. bakalu which is new to science.",
"title": ""
},
{
"docid": "4f68e4859a717833d214a431b8d796ad",
"text": "Time domain synchronous OFDM (TDS-OFDM) has higher spectral efficiency than cyclic prefix OFDM (CP-OFDM), but suffers from severe performance loss over fast fading channels. In this paper, a novel transmission scheme called time-frequency training OFDM (TFT-OFDM) is proposed. The time-frequency joint channel estimation for TFT-OFDM utilizes the time-domain training sequence without interference cancellation to merely acquire the time delay profile of the channel, while the path coefficients are estimated by using the frequency-domain group pilots. The redundant group pilots only occupy about 1% of the useful subcarriers, thus TFT-OFDM still has much higher spectral efficiency than CP-OFDM by about 10%. Simulation results also demonstrate that TFT-OFDM outperforms CP-OFDM and TDS-OFDM over time-varying channels.",
"title": ""
},
{
"docid": "2944000757568f330b495ba2a446b0a0",
"text": "In this paper, we propose Deep Alignment Network (DAN), a robust face alignment method based on a deep neural network architecture. DAN consists of multiple stages, where each stage improves the locations of the facial landmarks estimated by the previous stage. Our method uses entire face images at all stages, contrary to the recently proposed face alignment methods that rely on local patches. This is possible thanks to the use of landmark heatmaps which provide visual information about landmark locations estimated at the previous stages of the algorithm. The use of entire face images rather than patches allows DAN to handle face images with large variation in head pose and difficult initializations. An extensive evaluation on two publicly available datasets shows that DAN reduces the state-of-the-art failure rate by up to 70%. Our method has also been submitted for evaluation as part of the Menpo challenge.",
"title": ""
},
{
"docid": "a05d87b064ab71549d373599700cfcbf",
"text": "We provide sets of parameters for multiplicative linear congruential generators (MLCGs) of different sizes and good performance with respect to the spectral test. For ` = 8, 9, . . . , 64, 127, 128, we take as a modulus m the largest prime smaller than 2`, and provide a list of multipliers a such that the MLCG with modulus m and multiplier a has a good lattice structure in dimensions 2 to 32. We provide similar lists for power-of-two moduli m = 2`, for multiplicative and non-multiplicative LCGs.",
"title": ""
},
{
"docid": "86cc0465767c9e079465df61c52c8398",
"text": "Songbirds learn their songs by trial-and-error experimentation, producing highly variable vocal output as juveniles. By comparing their own sounds to the song of a tutor, young songbirds gradually converge to a stable song that can be a remarkably good copy of the tutor song. Here we show that vocal variability in the learning songbird is induced by a basal-ganglia-related circuit, the output of which projects to the motor pathway via the lateral magnocellular nucleus of the nidopallium (LMAN). We found that pharmacological inactivation of LMAN dramatically reduced acoustic and sequence variability in the songs of juvenile zebra finches, doing so in a rapid and reversible manner. In addition, recordings from LMAN neurons projecting to the motor pathway revealed highly variable spiking activity across song renditions, showing that LMAN may act as a source of variability. Lastly, pharmacological blockade of synaptic inputs from LMAN to its target premotor area also reduced song variability. Our results establish that, in the juvenile songbird, the exploratory motor behavior required to learn a complex motor sequence is dependent on a dedicated neural circuit homologous to cortico-basal ganglia circuits in mammals.",
"title": ""
},
{
"docid": "d2d16580335dcff2f0d05ca8a43438ef",
"text": "Evolutionary adaptation can be rapid and potentially help species counter stressful conditions or realize ecological opportunities arising from climate change. The challenges are to understand when evolution will occur and to identify potential evolutionary winners as well as losers, such as species lacking adaptive capacity living near physiological limits. Evolutionary processes also need to be incorporated into management programmes designed to minimize biodiversity loss under rapid climate change. These challenges can be met through realistic models of evolutionary change linked to experimental data across a range of taxa.",
"title": ""
},
{
"docid": "9a2b8f7e82647a7e63af839bff2412aa",
"text": "The user's understanding of information needs and the information available in the data collection can evolve during an exploratory search session. Search systems tailored for well-defined narrow search tasks may be suboptimal for exploratory search where the user can sequentially refine the expressions of her information needs and explore alternative search directions. A major challenge for exploratory search systems design is how to support such behavior and expose the user to relevant yet novel information that can be difficult to discover by using conventional query formulation techniques. We introduce IntentStreams, a system for exploratory search that provides interactive query refinement mechanisms and parallel visualization of search streams. The system models each search stream via an intent model allowing rapid user feedback. The user interface allows swift initiation of alternative and parallel search streams by direct manipulation that does not require typing. A study with 13 participants shows that IntentStreams provides better support for branching behavior compared to a conventional search system.",
"title": ""
},
{
"docid": "74d84a74edd2a18387d6ac73f2c2b8d5",
"text": "The continued increase in the atmospheric concentration of carbon dioxide due to anthropogenic emissions is predicted to lead to significant changes in climate. About half of the current emissions are being absorbed by the ocean and by land ecosystems, but this absorption is sensitive to climate as well as to atmospheric carbon dioxide concentrations, creating a feedback loop. General circulation models have generally excluded the feedback between climate and the biosphere, using static vegetation distributions and CO2 concentrations from simple carbon-cycle models that do not include climate change. Here we present results from a fully coupled, three-dimensional carbon–climate model, indicating that carbon-cycle feedbacks could significantly accelerate climate change over the twenty-first century. We find that under a ‘business as usual’ scenario, the terrestrial biosphere acts as an overall carbon sink until about 2050, but turns into a source thereafter. By 2100, the ocean uptake rate of 5 Gt C yr-1 is balanced by the terrestrial carbon source, and atmospheric CO2 concentrations are 250 p.p.m.v. higher in our fully coupled simulation than in uncoupled carbon models, resulting in a global-mean warming of 5.5 K, as compared to 4 K without the carbon-cycle feedback.",
"title": ""
},
{
"docid": "f6a19d26df9acabe9185c4c167520422",
"text": "OBJECTIVE Benign enlargement of the subarachnoid spaces (BESS) is a common finding on imaging studies indicated by macrocephaly in infancy. This finding has been associated with the presence of subdural fluid collections that are sometimes construed as suggestive of abusive head injury. The prevalence of BESS among infants with macrocephaly and the prevalence of subdural collections among infants with BESS are both poorly defined. The goal of this study was to determine the relative frequencies of BESS, hydrocephalus, and subdural collections in a large consecutive series of imaging studies performed for macrocephaly and to determine the prevalence of subdural fluid collections among patients with BESS. METHODS A text search of radiology requisitions identified studies performed for macrocephaly in patients ≤ 2 years of age. Studies of patients with hydrocephalus or acute trauma were excluded. Studies that demonstrated hydrocephalus or chronic subdural hematoma not previously recognized but responsible for macrocephaly were noted but not investigated further. The remaining studies were reviewed for the presence of incidental subdural collections and for measurement of the depth of the subarachnoid space. A 3-point scale was used to grade BESS: Grade 0, < 5 mm; Grade 1, 5-9 mm; and Grade 2, ≥ 10 mm. RESULTS After exclusions, there were 538 studies, including 7 cases of hydrocephalus (1.3%) and 1 large, bilateral chronic subdural hematoma (0.2%). There were incidental subdural collections in 21 cases (3.9%). Two hundred sixty-five studies (49.2%) exhibited Grade 1 BESS, and 46 studies (8.6%) exhibited Grade 2 BESS. The prevalence of incidental subdural collections among studies with BESS was 18 of 311 (5.8%). The presence of BESS was associated with a greater prevalence of subdural collections, and higher grades of BESS were associated with increasing prevalence of subdural collections. After controlling for imaging modality, the odds ratio of the association of BESS with subdural collections was 3.68 (95% CI 1.12-12.1, p = 0.0115). There was no association of race, sex, or insurance status with subdural collections. Patients with BESS had larger head circumference Z-scores, but there was no association of head circumference or age with subdural collections. Interrater reliability in the diagnosis and grading of BESS was only fair. CONCLUSIONS The current study confirms the association of BESS with incidental subdural collections and suggests that greater depth of the subarachnoid space is associated with increased prevalence of such collections. These observations support the theory that infants with BESS have a predisposition to subdural collections on an anatomical basis. Incidental subdural collections in the setting of BESS are not necessarily indicative of abusive head injury.",
"title": ""
},
{
"docid": "2d845ef6552b77fb4dd0d784233aa734",
"text": "The timing of the origin of arthropods in relation to the Cambrian explosion is still controversial, as are the timing of other arthropod macroevolutionary events such as the colonization of land and the evolution of flight. Here we assess the power of a phylogenomic approach to shed light on these major events in the evolutionary history of life on earth. Analyzing a large phylogenomic dataset (122 taxa, 62 genes) with a Bayesian-relaxed molecular clock, we simultaneously reconstructed the phylogenetic relationships and the absolute times of divergences among the arthropods. Simulations were used to test whether our analysis could distinguish between alternative Cambrian explosion scenarios with increasing levels of autocorrelated rate variation. Our analyses support previous phylogenomic hypotheses and simulations indicate a Precambrian origin of the arthropods. Our results provide insights into the 3 independent colonizations of land by arthropods and suggest that evolution of insect wings happened much earlier than the fossil record indicates, with flight evolving during a period of increasing oxygen levels and impressively large forests. These and other findings provide a foundation for macroevolutionary and comparative genomic study of Arthropoda.",
"title": ""
},
{
"docid": "bea1aab100753e782527f631c1b110c1",
"text": "The great content diversity of real-world digital images poses a grand challenge to image quality assessment (IQA) models, which are traditionally designed and validated on a handful of commonly used IQA databases with very limited content variation. To test the generalization capability and to facilitate the wide usage of IQA techniques in real-world applications, we establish a large-scale database named the Waterloo Exploration Database, which in its current state contains 4744 pristine natural images and 94 880 distorted images created from them. Instead of collecting the mean opinion score for each image via subjective testing, which is extremely difficult if not impossible, we present three alternative test criteria to evaluate the performance of IQA models, namely, the pristine/distorted image discriminability test, the listwise ranking consistency test, and the pairwise preference consistency test (P-test). We compare 20 well-known IQA models using the proposed criteria, which not only provide a stronger test in a more challenging testing environment for existing models, but also demonstrate the additional benefits of using the proposed database. For example, in the P-test, even for the best performing no-reference IQA model, more than 6 million failure cases against the model are “discovered” automatically out of over 1 billion test pairs. Furthermore, we discuss how the new database may be exploited using innovative approaches in the future, to reveal the weaknesses of existing IQA models, to provide insights on how to improve the models, and to shed light on how the next-generation IQA models may be developed. The database and codes are made publicly available at: https://ece.uwaterloo.ca/~k29ma/exploration/.",
"title": ""
},
{
"docid": "dbdda952c63b7b7a4f8ce68f806e5238",
"text": "This paper examines how real-time information gathered as part of intelligent transportation systems can be used to predict link travel times for one through five time periods ahead (of 5-min duration). The study employed a spectral basis artificial neural network (SNN) that utilizes a sinusoidal transformation technique to increase the linear separability of the input features. Link travel times from Houston that had been collected as part of the automatic vehicle identification system of the TranStar system were used as a test bed. It was found that the SNN outperformed a conventional artificial neural network and gave similar results to that of modular neural networks. However, the SNN requires significantly less effort on the part of the modeler than modular neural networks. The results of the best SNN were compared with conventional link travel time prediction techniques including a Kalman filtering model, exponential smoothing model, historical profile, and realtime profile. It was found that the SNN gave the best overall results.",
"title": ""
},
{
"docid": "14077e87744089bb731085590be99a75",
"text": "The Vehicle Routing Problem (VRP) is an important problem occurring in many logistics systems. The objective of VRP is to serve a set of customers at minimum cost, such that every node is visited by exactly one vehicle only once. In this paper, we consider the Dynamic Vehicle Routing Problem (DVRP) which new customer demands are received along the day. Hence, they must be serviced at their locations by a set of vehicles in real time minimizing the total travel distance. The main goal of this research is to find a solution of DVRP using genetic algorithm. However we used some heuristics in addition during generation of the initial population and crossover for tuning the system to obtain better result. The computational experiments were applied to 22 benchmarks instances with up to 385 customers and the effectiveness of the proposed approach is validated by comparing the computational results with those previously presented in the literature.",
"title": ""
},
{
"docid": "b387476c4ff2b2b5ed92a23c7f065026",
"text": "In this article, I review the diagnostic criteria for Gender Identity Disorder (GID) in children as they were formulated in the DSM-III, DSM-III-R, and DSM-IV. The article focuses on the cumulative evidence for diagnostic reliability and validity. It does not address the broader conceptual discussion regarding GID as \"disorder,\" as this issue is addressed in a companion article by Meyer-Bahlburg (2009). This article addresses criticisms of the GID criteria for children which, in my view, can be addressed by extant empirical data. Based in part on reanalysis of data, I conclude that the persistent desire to be of the other gender should, in contrast to DSM-IV, be a necessary symptom for the diagnosis. If anything, this would result in a tightening of the diagnostic criteria and may result in a better separation of children with GID from children who display marked gender variance, but without the desire to be of the other gender.",
"title": ""
},
{
"docid": "fdf95905dd8d3d8dcb4388ac921b3eaa",
"text": "Relation classification is associated with many potential applications in the artificial intelligence area. Recent approaches usually leverage neural networks based on structure features such as syntactic or dependency features to solve this problem. However, high-cost structure features make such approaches inconvenient to be directly used. In addition, structure features are probably domaindependent. Therefore, this paper proposes a bidirectional long-short-term-memory recurrent-neuralnetwork (Bi-LSTM-RNN) model based on low-cost sequence features to address relation classification. This model divides a sentence or text segment into five parts, namely two target entities and their three contexts. It learns the representations of entities and their contexts, and uses them to classify relations. We evaluate our model on two standard benchmark datasets in different domains, namely SemEval-2010 Task 8 and BioNLP-ST 2016 Task BB3. In the former dataset, our model achieves comparable performance compared with other models using sequence features. In the latter dataset, our model obtains the third best results compared with other models in the official evaluation. Moreover, we find that the context between two target entities plays the most important role in relation classification. Furthermore, statistic experiments show that the context between two target entities can be used as an approximate replacement of the shortest dependency path when dependency parsing is not used.",
"title": ""
},
{
"docid": "c7351e8ce6d32b281d5bd33b245939c6",
"text": "In TREC 2002 the Berkeley group participated only in the English-Arabic cross-language retrieval (CLIR) track. One Arabic monolingual run and three English-Arabic cross-language runs were submitted. Our approach to the crosslanguage retrieval was to translate the English topics into Arabic using online English-Arabic machine translation systems. The four official runs are named as BKYMON, BKYCL1, BKYCL2, and BKYCL3. The BKYMON is the Arabic monolingual run, and the other three runs are English-to-Arabic cross-language runs. This paper reports on the construction of an Arabic stoplist and two Arabic stemmers, and the experiments on Arabic monolingual retrieval, English-to-Arabic cross-language retrieval.",
"title": ""
},
{
"docid": "8b2c83868c16536910e7665998b2d87e",
"text": "Nowadays organizations turn to any standard procedure to gain a competitive advantage. If sustainable, competitive advantage can bring about benefit to the organization. The aim of the present study was to introduce competitive advantage as well as to assess the impacts of the balanced scorecard as a means to measure the performance of organizations. The population under study included employees of organizations affiliated to the Social Security Department in North Khorasan Province, of whom a total number of 120 employees were selected as the participants in the research sample. Two researcher-made questionnaires with a 5-point Likert scale were used to measure the competitive advantage and the balanced scorecard. Besides, Cronbach's alpha coefficient was used to measure the reliability of the instruments that was equal to 0.74 and 0.79 for competitive advantage and the balanced scorecard, respectively. The data analysis was performed using the structural equation modeling and the results indicated the significant and positive impact of the implementation of the balanced scorecard on the sustainable competitive advantage. © 2015 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "8308fe89676df668e66287a44103980b",
"text": "Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.",
"title": ""
},
{
"docid": "d3765112295d9a4591b438130df59a25",
"text": "This paper presents the design and mathematical model of a lower extremity exoskeleton device used to make paralyzed people walk again. The design takes into account the anatomy of standard human leg with a total of 11 Degrees of freedom (DoF). A CAD model in SolidWorks is presented along with its fabrication and a mathematical model in MATLAB.",
"title": ""
},
{
"docid": "cbc81f267b98cc3f3986552515657b0f",
"text": "Multivariate quantitative traits arise naturally in recent neuroimaging genetics studies, in which both structural and functional variability of the human brain is measured non-invasively through techniques such as magnetic resonance imaging (MRI). There is growing interest in detecting genetic variants associated with such multivariate traits, especially in genome-wide studies. Random forests (RFs) classifiers, which are ensembles of decision trees, are amongst the best performing machine learning algorithms and have been successfully employed for the prioritisation of genetic variants in case-control studies. RFs can also be applied to produce gene rankings in association studies with multivariate quantitative traits, and to estimate genetic similarities measures that are predictive of the trait. However, in studies involving hundreds of thousands of SNPs and high-dimensional traits, a very large ensemble of trees must be inferred from the data in order to obtain reliable rankings, which makes the application of these algorithms computationally prohibitive. We have developed a parallel version of the RF algorithm for regression and genetic similarity learning tasks in large-scale population genetic association studies involving multivariate traits, called PaRFR (Parallel Random Forest Regression). Our implementation takes advantage of the MapReduce programming model and is deployed on Hadoop, an open-source software framework that supports data-intensive distributed applications. Notable speed-ups are obtained by introducing a distance-based criterion for node splitting in the tree estimation process. PaRFR has been applied to a genome-wide association study on Alzheimer's disease (AD) in which the quantitative trait consists of a high-dimensional neuroimaging phenotype describing longitudinal changes in the human brain structure. PaRFR provides a ranking of SNPs associated to this trait, and produces pair-wise measures of genetic proximity that can be directly compared to pair-wise measures of phenotypic proximity. Several known AD-related variants have been identified, including APOE4 and TOMM40. We also present experimental evidence supporting the hypothesis of a linear relationship between the number of top-ranked mutated states, or frequent mutation patterns, and an indicator of disease severity. The Java codes are freely available at http://www2.imperial.ac.uk/~gmontana .",
"title": ""
}
] |
scidocsrr
|