A Priority based Round Robin CPU Scheduling Algorithm for Real Time Systems
The main objective of this paper is to develop a new approach to round robin CPU scheduling that improves CPU performance in real time operating systems. The proposed Priority based Round-Robin CPU Scheduling algorithm integrates round-robin and priority scheduling: it retains the advantage of round robin in reducing starvation while adding the benefits of priority scheduling. The proposed algorithm also implements the concept of aging by assigning new priorities to processes. Existing round robin CPU scheduling algorithms cannot be implemented in real time operating systems due to their high context switch rates, large waiting time, large response time, large turnaround time and low throughput. The proposed algorithm addresses all these drawbacks of round robin CPU scheduling. The paper also presents a comparative analysis of the proposed algorithm against the existing round robin scheduling algorithm on the basis of varying time quantum, average waiting time, average turnaround time and number of context switches.
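As an illustration of the idea of combining priority ordering, round-robin time slices, and aging, here is a minimal Python sketch. The tuple format, the quantum handling, and the aging rule (raising a process's priority each time it is requeued) are our own illustrative assumptions, not the paper's exact algorithm.

```python
from collections import deque

def priority_round_robin(processes, quantum):
    """Toy priority-based round-robin scheduler.

    `processes` is a list of (name, burst_time, priority) tuples,
    where a lower priority value means higher priority. Returns the
    order in which time slices are granted.
    """
    # Serve the ready queue in priority order, round-robin within it.
    ready = deque(sorted(processes, key=lambda p: p[2]))
    order = []
    while ready:
        name, remaining, prio = ready.popleft()
        order.append(name)
        remaining -= quantum
        if remaining > 0:
            # Aging: bump the priority so the process is not starved.
            ready.append((name, remaining, max(0, prio - 1)))
            ready = deque(sorted(ready, key=lambda p: p[2]))
    return order

print(priority_round_robin([("P1", 4, 2), ("P2", 2, 1)], quantum=2))
# → ['P2', 'P1', 'P1']: the higher-priority P2 runs first
```

The aging step re-sorts the queue after each requeue; a production scheduler would use a priority queue instead of repeated sorting.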
Two-sided certification: The market for rating agencies
Certifiers contribute to the sound functioning of markets by reducing asymmetric information. They have, however, been heavily criticized during the 2008-09 financial crisis. This paper investigates on which side of the market a monopolistic profit-maximizing certifier offers his service. If the seller demands a rating, the certifier announces the product quality publicly, whereas if the buyer requests a rating, it remains his private information. The model shows that the certifier offers his service to both sellers and buyers to maximize his own profit, with a higher share from the sellers. Overall, certifiers increase welfare in specific markets. Revenue shifts due to the financial crisis are also explained.
Automotive big data: Applications, workloads and infrastructures
Data is increasingly affecting the automotive industry, from vehicle development, to manufacturing and service processes, to online services centered around the connected vehicle. Connected, mobile and Internet of Things devices and machines generate immense amounts of sensor data. The ability to process and analyze this data to extract insights and knowledge that enable intelligent services, new ways to understand business problems, improvements of processes and decisions, is a critical capability. Hadoop is a scalable platform for compute and storage and has emerged as the de-facto standard for Big Data processing at Internet companies and in the scientific community. However, there is a lack of understanding of how and for what use cases these new Hadoop capabilities can be efficiently used to augment automotive applications and systems. This paper surveys use cases and applications for deploying Hadoop in the automotive industry. Over the years a rich ecosystem has emerged around Hadoop, comprising tools for parallel, in-memory and stream processing (most notably MapReduce and Spark), SQL and NoSQL engines (Hive, HBase), and machine learning (Mahout, MLlib). It is critical to develop an understanding of automotive applications and their characteristics and requirements for data discovery, integration, exploration and analytics. We then map these requirements to a confined technical architecture consisting of core Hadoop services and libraries for data ingest, processing and analytics. The objective of this paper is to address questions such as: What applications and datasets are suitable for Hadoop? How can a diverse set of frameworks and tools be managed on a multi-tenant Hadoop cluster? How do these tools integrate with existing relational data management systems? How can enterprise security requirements be addressed? What are the performance characteristics of these tools for real-world automotive applications?
To address the last question, we utilize a standard benchmark (TPCx-HS), and two application benchmarks (SQL and machine learning) that operate on a dataset of multiple Terabytes and billions of rows.
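To make the MapReduce processing model mentioned above concrete, here is a toy in-memory map/shuffle/reduce sketch in Python. The event names are invented for illustration; this is the programming model Hadoop scales out across a cluster, not Hadoop's actual API.

```python
from functools import reduce
from itertools import groupby

def map_reduce(records, mapper, reducer):
    """Toy single-process MapReduce: map each record to (key, value)
    pairs, shuffle (group by key), then reduce each group's values."""
    mapped = [kv for r in records for kv in mapper(r)]
    mapped.sort(key=lambda kv: kv[0])          # the "shuffle" phase
    return {k: reduce(reducer, (v for _, v in group))
            for k, group in groupby(mapped, key=lambda kv: kv[0])}

# Word-count style aggregation over hypothetical vehicle sensor events.
events = ["brake", "brake", "lane_change", "brake"]
counts = map_reduce(events, lambda w: [(w, 1)], lambda a, b: a + b)
print(counts)  # → {'brake': 3, 'lane_change': 1}
```

In a real Hadoop or Spark job, the map and reduce phases run on different machines and the shuffle moves data over the network; the logic per key is the same.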
Enhancing Scientific Researches for Public Benefit to Support Water Conservancy of the Yangtze River
The paper makes a detailed review of the development of public beneficial specialties of the Changjiang/Yangtze River Scientific Research Institute in the last decade. Innovative research achievements are presented in terms of flood control, drought relief and disaster mitigation, river regulation, water resources management, water and soil conservation, water ecology and environmental protection, river basin planning, and integrated water administration. In line with the new tasks of river regulation in the forthcoming years, considerations for the prospect of scientific researches for public benefit are put forward.
Solving the emotion paradox: categorization and the experience of emotion.
In this article, I introduce an emotion paradox: People believe that they know an emotion when they see it, and as a consequence assume that emotions are discrete events that can be recognized with some degree of accuracy, but scientists have yet to produce a set of clear and consistent criteria for indicating when an emotion is present and when it is not. I propose one solution to this paradox: People experience an emotion when they conceptualize an instance of affective feeling. In this view, the experience of emotion is an act of categorization, guided by embodied knowledge about emotion. The result is a model of emotion experience that has much in common with the social psychological literature on person perception and with literature on embodied conceptual knowledge as it has recently been applied to social psychology.
Face Recognition for Smart Environments
Security Services Using Blockchains: A State of the Art Survey
This paper surveys blockchain-based approaches for several security services. These services include authentication, confidentiality, privacy and access control list, data and resource provenance, and integrity assurance. All these services are critical for the current distributed applications, especially due to the large amount of data being processed over the networks and the use of cloud computing. Authentication ensures that the user is who he/she claims to be. Confidentiality guarantees that data cannot be read by unauthorized users. Privacy provides the users the ability to control who can access their data. Provenance allows an efficient tracking of the data and resources along with their ownership and utilization over the network. Integrity helps in verifying that the data has not been modified or altered. These services are currently managed by centralized controllers, for example, a certificate authority. Therefore, the services are prone to attacks on the centralized controller. On the other hand, blockchain is a secured and distributed ledger that can help resolve many of the problems with centralization. The objectives of this paper are to give insights on the use of security services for current applications, to highlight the state of the art techniques that are currently used to provide these services, to describe their challenges, and to discuss how the blockchain technology can resolve these challenges. Further, several blockchain-based approaches providing such security services are compared thoroughly. Challenges associated with using blockchain-based security services are also discussed to spur further research in this area.
Prescribing patterns in dementia: a multicentre observational study in a German network of CAM physicians
BACKGROUND Dementia is a major and increasing health problem worldwide. This study aims to investigate dementia treatment strategies among physicians specialised in complementary and alternative medicine (CAM) by analysing prescribing patterns and comparing them to current treatment guidelines in Germany. METHODS Twenty-two primary care physicians in Germany participated in this prospective, multicentre observational study. Prescriptions and diagnoses were reported for each consecutive patient. Data were included if patients had at least one diagnosis of dementia according to the 10th revision of the International Classification of Diseases during the study period. Multiple logistic regression was used to determine factors associated with a prescription of any anti-dementia drug including Ginkgo biloba. RESULTS During the 5-year study period (2004-2008), 577 patients with dementia were included (median age: 81 years (IQR: 74-87); 69% female). Dementia was classified as unspecified dementia (57.2%), vascular dementia (25.1%), dementia in Alzheimer's disease (10.4%), and dementia in Parkinson's disease (7.3%). The prevalence of anti-dementia drugs was 25.6%. The phytopharmaceutical Ginkgo biloba was the most frequently prescribed anti-dementia drug overall (67.6% of all) followed by cholinesterase inhibitors (17.6%). The adjusted odds ratio (AOR) for receiving any anti-dementia drug was greater than 1 for neurologists (AOR = 2.34; CI: 1.59-3.47), the diagnosis of Alzheimer's disease (AOR = 3.28; CI: 1.96-5.50), neuroleptic therapy (AOR = 1.87; CI: 1.22-2.88), co-morbidities hypertension (AOR = 2.03; CI: 1.41-2.90), and heart failure (AOR = 4.85; CI: 3.42-6.88). The chance for a prescription of any anti-dementia drug decreased with the diagnosis of vascular dementia (AOR = 0.64; CI: 0.43-0.95) and diabetes mellitus (AOR = 0.55; CI: 0.36-0.86). 
The prescription of Ginkgo biloba was associated with sex (female: AOR = 0.41; CI: 0.19-0.89), patient age (AOR = 1.06; CI: 1.02-1.10), treatment by a neurologist (AOR = 0.09; CI: 0.03-0.23), and the diagnosis of Alzheimer's disease (AOR = 0.07; CI: 0.04-0.16). CONCLUSIONS This study provides a comprehensive analysis of everyday dementia treatment practice among primary care physicians with a focus on CAM. The prescribing frequency for anti-dementia drugs is comparable to that found in other German studies, while the administration of Ginkgo biloba is significantly higher.
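The adjusted odds ratios (AORs) reported above come from multiple logistic regression, where the AOR is the exponential of a fitted coefficient. As a minimal illustration of what the quantity measures, the sketch below computes a crude (unadjusted) odds ratio from a 2x2 table; the counts are invented, not the study's data.

```python
def odds_ratio(exposed_cases, exposed_controls,
               unexposed_cases, unexposed_controls):
    """Crude odds ratio from a 2x2 table: (a*d)/(b*c).
    An *adjusted* OR additionally controls for covariates via
    multiple logistic regression (AOR = exp(beta))."""
    return ((exposed_cases * unexposed_controls)
            / (exposed_controls * unexposed_cases))

# Hypothetical counts: prescription (cases) vs. none (controls),
# split by whether the patient was treated by a neurologist.
or_ = odds_ratio(40, 60, 30, 105)
print(round(or_, 2))  # → 2.33, i.e. odds of prescription ~2.3x higher
```

An OR above 1 indicates increased odds of the outcome in the exposed group, matching how AORs such as 2.34 for neurologists are read in the abstract.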
Aging and verbal memory span: a meta-analysis.
Using Brinley plots, this meta-analysis provides a quantitative examination of age differences in eight verbal span tasks. The main conclusions are these: (a) there are age differences in all verbal span tasks; (b) the data support the conclusion that working memory span is more age sensitive than short-term memory span; and (c) there is a linear relationship between span of younger adults and span of older adults. A linear model indicates the presence of three distinct functions, in increasing order of size of age effects: simple storage span; backward digit span; and working memory span.
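A Brinley plot regresses older adults' scores on younger adults' scores across tasks; the "linear relationship" conclusion is a least-squares line on that plot. The sketch below fits such a line with plain least squares; the span values are invented for illustration, not the meta-analysis data.

```python
def brinley_fit(young, old):
    """Least-squares fit old = a*young + b across tasks,
    as used to summarize a Brinley plot."""
    n = len(young)
    my, mo = sum(young) / n, sum(old) / n
    a = (sum((y - my) * (o - mo) for y, o in zip(young, old))
         / sum((y - my) ** 2 for y in young))
    b = mo - a * my
    return a, b

# Hypothetical mean spans of younger adults vs. matched older adults.
young = [5.0, 6.0, 7.0, 8.0]
old = [4.2, 5.0, 5.8, 6.6]
a, b = brinley_fit(young, old)
print(a, b)  # slope ≈ 0.8, intercept ≈ 0.2
```

A slope below 1 on such a plot indicates that age differences grow with task demands, which is how distinct functions for storage span versus working memory span would show up.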
Overeducation and skill mismatch: a dynamic analysis
Half of American workers have a level of education that does not match the level of education required for their job. Of these, a majority are overeducated, i.e. have more schooling than necessary to perform their job (see, e.g., Leuven & Oosterbeek, 2011). In this paper, we use data from the National Longitudinal Survey of Youth 1979 (NLSY79) combined with the pooled 1989-1991 waves of the CPS to provide some of the first evidence regarding the dynamics of overeducation over the life cycle. Shedding light on this question is key to disentangling the role played by labor market frictions versus other factors such as selection on unobservables, compensating differentials or career mobility prospects. Overall, our results suggest that overeducation is a fairly persistent phenomenon, with 79% of workers remaining overeducated after one year. Initial overeducation also has an impact on wages much later in the career, which points to the existence of scarring effects. Finally, we find some evidence of duration dependence, with a 6.5 point decrease in the exit rate from overeducation after having spent five years overeducated. JEL Classification: J24; I21
Recent advances in planar optics: from plasmonic to dielectric metasurfaces
Patrice Genevet, Federico Capasso,* Francesco Aieta, Mohammadreza Khorasaninejad, and Robert Devlin. Université Côte d'Azur, CNRS, CRHEA, rue Bernard Gregory, Sophia Antipolis 06560 Valbonne, France; John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA; Hewlett-Packard Laboratories, Palo Alto, California 94304, USA. e-mail: [email protected] *Corresponding author: [email protected]
Structured Feature Learning for Pose Estimation
In this paper, we propose a structured feature learning framework to reason about the correlations among body joints at the feature level in human pose estimation. Different from existing approaches that model structures on score maps or predicted labels, feature maps preserve substantially richer descriptions of body joints. The relationships between feature maps of joints are captured with the introduced geometrical transform kernels, which can be easily implemented with a convolution layer. Features and their relationships are jointly learned in an end-to-end learning system. A bi-directional tree-structured model is proposed, so that the feature channels at a body joint can well receive information from other joints. The proposed framework improves feature learning substantially. With very simple post-processing, it reaches the best mean PCP on the LSP and FLIC datasets. Compared with the baseline of learning features at each joint separately with a ConvNet, the mean PCP has been improved by 18% on FLIC. The code is released to the public.
Design, stress analysis and determination of center of gravity on stair climber wheelchair
Not all public facilities in Indonesia provide supporting infrastructure that wheelchair users can access freely. The main obstacle is the absence of a dedicated accessible entrance path, forcing wheelchair users to climb stairs. The movement of the wheels when climbing stairs and the sitting-balance system for the patient are the major problems addressed in this study. The purpose of this research is to make a prototype of a mechanical system for a wheelchair so that it can climb stairs easily and safely. In the mechanical system, the wheel design uses a three-wheel system arranged in parallel to form a triangle. To balance the user's position, a screw system is mounted under the seat as a regulator of the seat tilt angle. The mechanical design takes into account the determination of the Center of Gravity (CoG) to ensure stability. It is hoped that this wheelchair prototype becomes a means of supporting mobility for the disabled to access public facilities in Indonesia.
Candidate Multilinear Maps from Ideal Lattices
We describe plausible lattice-based constructions with properties that approximate the soughtafter multilinear maps in hard-discrete-logarithm groups, and show an example application of such multi-linear maps that can be realized using our approximation. The security of our constructions relies on seemingly hard problems in ideal lattices, which can be viewed as extensions of the assumed hardness of the NTRU function. This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center (DoI/NBC) contract number D11PC20202. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government. ∗Research conducted while at the IBM Research, T.J. Watson funded by NSF Grant No.1017660.
Hackers - Heroes of the Computer Revolution
Hackers: Heroes of the Computer Revolution is a literary work well suited as reading material. The book not only serves as a reference but also demonstrates the benefits of reading; developing the mind is essential, and the book is especially appropriate for readers with great curiosity.
What is a support vector machine?
Support vector machines (SVMs) are becoming popular in a wide variety of biological applications. But, what exactly are SVMs and how do they work? And what are their most promising applications in the life sciences?
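To sketch how an SVM works at its core, the example below trains a minimal linear SVM by sub-gradient descent on the hinge loss (in the style of the Pegasos solver). This is our own illustrative sketch, not the tuned solvers used in practice: it omits the bias term, so the toy data is centered at the origin, and it uses no kernel.

```python
def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Minimal linear SVM: minimize the regularized hinge loss
    lam/2*||w||^2 + mean(max(0, 1 - y_i * w.x_i)) by sub-gradient
    descent. Labels must be +1 or -1; no bias term."""
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            if margin < 1:   # point inside the margin: hinge is active
                w = [wj - eta * (lam * wj - yi * xj)
                     for wj, xj in zip(w, xi)]
            else:            # only the regularizer pulls on w
                w = [wj * (1 - eta * lam) for wj in w]
    return w

def predict(w, x):
    """Classify by the sign of the decision function w.x."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

X = [[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -2.0]]
y = [1, 1, -1, -1]
w = train_linear_svm(X, y)
print([predict(w, x) for x in X])  # → [1, 1, -1, -1]
```

The margin test `margin < 1` is what distinguishes an SVM from a plain linear classifier: only points on or inside the margin (the support vectors) drive the solution.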
Camera Pose Filtering with Local Regression Geodesics on the Riemannian Manifold of Dual Quaternions
Time-varying, smooth trajectory estimation is of great interest to the vision community for accurate and well-behaved 3D systems. In this paper, we propose a novel principal component local regression filter acting directly on the Riemannian manifold of unit dual quaternions DH1. We use a numerically stable Lie algebra of the dual quaternions together with exp and log operators to locally linearize the 6D pose space. Unlike state-of-the-art path smoothing methods, which operate either on SO(3) of rotation matrices or on the hypersphere H1 of quaternions, we treat the orientation and translation jointly on the dual quaternion quadric in the 7-dimensional real projective space RP7. We provide an outlier-robust IRLS algorithm for generic pose filtering exploiting this manifold structure. Besides our theoretical analysis, our experiments on synthetic and real data show the practical advantages of manifold-aware filtering for pose tracking and smoothing.
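The paper works on the full dual-quaternion manifold; as a minimal illustration of the log/exp machinery it relies on, the sketch below implements the log and exp maps for ordinary unit quaternions (the rotation part only) and uses them to average two rotations in the tangent space. The functions and example are our own illustration, not the paper's DH1 filter.

```python
import math

def qlog(q):
    """Log map of a unit quaternion (w, x, y, z) to a 3-vector
    in the tangent space at the identity."""
    w, v = q[0], q[1:]
    n = math.sqrt(sum(c * c for c in v))
    if n < 1e-12:
        return (0.0, 0.0, 0.0)
    theta = math.atan2(n, w)
    return tuple(theta * c / n for c in v)

def qexp(u):
    """Exp map: 3-vector back to a unit quaternion."""
    theta = math.sqrt(sum(c * c for c in u))
    if theta < 1e-12:
        return (1.0, 0.0, 0.0, 0.0)
    s = math.sin(theta) / theta
    return (math.cos(theta),) + tuple(s * c for c in u)

# Midpoint of the identity and a 90-degree z-rotation: average in the
# tangent space, then map back -- yielding a 45-degree z-rotation.
q1 = (1.0, 0.0, 0.0, 0.0)
q2 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
mid = qexp(tuple((a + b) / 2 for a, b in zip(qlog(q1), qlog(q2))))
print(mid)  # ≈ (cos 22.5°, 0, 0, sin 22.5°)
```

Local regression on the manifold generalizes this: project nearby poses into the tangent space with log, fit there, then map the fitted value back with exp.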
Stray Current Corrosion and Mitigation: A synopsis of the technical methods used in dc transit systems.
Stray-current corrosion has been a source of concern for the transit authorities and utility companies since the inception of the electrified rail transit system. The corrosion problem caused by stray current was noticed within ten years of the first dc-powered rail line in the United States in 1888 [1] in Richmond, Virginia, and ever since, the control of stray current has been a critical issue. Similarly, the effects of rail and utility-pipe corrosion caused by stray current had been observed in Europe.
Comparative research of the common face detection methods
Face detection is an important part of face recognition. In this article, the common face detection algorithms are summarized and classified, and several representative face detection algorithms are compared: the principle, advantages and disadvantages of each method are described. Finally, the development trends and challenges of face detection methods are presented.
Ecological-economic analysis of wetlands : scientific integration for management and policy
Wetlands all over the world have been lost or are threatened in spite of various international agreements and national policies. This is caused by: (1) the public nature of many wetlands products and services; (2) user externalities imposed on other stakeholders; and (3) policy intervention failures that are due to a lack of consistency among government policies in different areas (economics, environment, nature protection, physical planning, etc.). All three causes are related to information failures which in turn can be linked to the complexity and ‘invisibility’ of spatial relationships among groundwater, surface water and wetland vegetation. Integrated wetland research combining social and natural sciences can help in part to solve the information failure to achieve the required consistency across various government policies. An integrated wetland research framework suggests that a combination of economic valuation, integrated modelling, stakeholder analysis, and multi-criteria evaluation can provide complementary insights into sustainable and welfare-optimising wetland management and policy. Subsequently, each of the various components of such integrated wetland research is reviewed and related to wetland management policy.
Designing for Exploratory Search on Touch Devices
Exploratory search confronts users with challenges in expressing search intents, as current search interfaces require investigating result listings to identify search directions, iterative typing, and reformulating queries. We present the design of Exploration Wall, a touch-based search user interface that allows incremental exploration and sense-making of large information spaces by combining entity search, flexible use of result entities as query parameters, and spatial configuration of search streams that are visualized for interaction. Entities can be flexibly reused to modify and create new search streams, and manipulated to inspect their relationships with other entities. Data from task-based experiments comparing Exploration Wall with a conventional search user interface indicate that Exploration Wall achieves significantly improved recall for exploratory search tasks while preserving precision. Subjective feedback supports our design choices and indicates improved user satisfaction and engagement. Our findings can help to design user interfaces that effectively support exploratory search on touch devices.
Privacy Preserving Data Mining
In this paper we introduce the concept of privacy preserving data mining. In our model, two parties owning confidential databases wish to run a data mining algorithm on the union of their databases, without revealing any unnecessary information. This problem has many practical and important applications, such as in medical research with confidential patient records. Data mining algorithms are usually complex, especially as the size of the input is measured in megabytes, if not gigabytes. A generic secure multi-party computation solution, based on evaluation of a circuit computing the algorithm on the entire input, is therefore of no practical use. We focus on the problem of decision tree learning and use ID3, a popular and widely used algorithm for this problem. We present a solution that is considerably more efficient than generic solutions. It demands very few rounds of communication and reasonable bandwidth. In our solution, each party performs by itself a computation of the same order as computing the ID3 algorithm for its own database. The results are then combined using efficient cryptographic protocols, whose overhead is only logarithmic in the number of transactions in the databases. We feel that our result is a substantial contribution, demonstrating that secure multi-party computation can be made practical, even for complex problems and large inputs.
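The abstract centers on ID3; its splitting criterion, information gain, is what the two parties must compute jointly without sharing data. The sketch below shows the plain, non-private computation, which is the quantity the paper's cryptographic protocols evaluate securely; the toy dataset is invented for illustration.

```python
import math

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(rows, labels, attr):
    """ID3's splitting criterion: entropy reduction from splitting
    the dataset on attribute index `attr`."""
    total = entropy(labels)
    parts = {}
    for row, lab in zip(rows, labels):
        parts.setdefault(row[attr], []).append(lab)
    n = len(labels)
    return total - sum(len(p) / n * entropy(p) for p in parts.values())

rows = [("sunny",), ("sunny",), ("rain",), ("rain",)]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, 0))
# → 1.0: this attribute fully determines the label
```

In the privacy-preserving setting, each party holds a share of `rows`; the protocols combine the parties' per-value label counts so that the gain is learned without either side revealing its records.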
What Determines the Profitability of Banks? Evidence from Spain
This paper analyzes empirically the factors that determine the profitability of Spanish banks for the period of 1999-2009. The results obtained by applying the system-GMM estimator to a large sample of Spanish banks indicate that the high bank profitability during these years is associated with a large percentage of loans in total assets, a high proportion of customer deposits, good efficiency, and a low credit risk. In addition, higher capital ratios also increase the bank’s return, although this finding applies only when using return on assets (ROA) as the profitability measure. We find no evidence of either economies or diseconomies of scale or scope in the Spanish banking sector. On the other hand, all industry and macroeconomic determinants, with the exception of interest rate, affect bank profitability in the anticipated ways. Finally, our study reveals differences in the performance of commercial and savings banks.
HyDroid: A Hybrid Approach for Generating API Call Traces from Obfuscated Android Applications for Mobile Security
The growing popularity of Android applications makes them vulnerable to security threats. Several studies focus on analysing the behaviour of Android applications to detect repackaged and malicious ones. These techniques use a variety of features to model an application's behaviour, among which the calls to the Android API made by the application components are shown to be the most reliable. Generating the APIs that an application calls is not an easy task, because most malicious applications are obfuscated and do not come with source code. This makes the problem of identifying the API methods invoked by an application an interesting research issue. In this paper, we present HyDroid, a hybrid approach that combines static and dynamic analysis to generate API call traces from the execution of an application's services. We focus on services because they contain key characteristics that lure attackers to misuse them. We show that HyDroid can be used to extract API call trace signatures of several malware families.
Active damping control of a high-power PWM current-source rectifier for line-current THD reduction
The use of active damping to reduce the total harmonic distortion (THD) of the line current for medium-voltage (2.3-7.2 kV) high-power pulsewidth-modulation (PWM) current-source rectifiers is investigated. The rectifier requires an LC filter connected at its input terminals, which constitutes an LC resonant mode. The lightly damped LC filter is prone to series and parallel resonances when tuned to a system harmonic either from the utility or from the PWM rectifier. These issues are traditionally addressed at the design stage by properly choosing the filter resonant frequency. This approach may result in a limited performance since the LC resonant frequency is a function of the power system impedance, which usually varies with power system operating conditions. In this paper, an active damping control method is proposed for the reduction in line current THD of high-power current-source rectifiers operating at a switching frequency of only 540 Hz. Two types of LC resonances are investigated: the parallel resonance excited by harmonic currents drawn by the rectifier and the series resonance caused by harmonic pollution in the source voltage. It is demonstrated through simulation and experiments that the proposed active damping control can effectively reduce the line-current THD caused by both parallel and series resonances.
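The abstract's central concern is the LC filter's resonant frequency and its proximity to harmonics of the 540 Hz switching frequency. The one-line computation below makes that concrete; the component values are invented for illustration, not taken from the paper.

```python
import math

def lc_resonant_frequency(L, C):
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of an LC filter,
    with L in henries and C in farads."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical input filter: 1 mH line inductance, 100 uF capacitor.
f = lc_resonant_frequency(1e-3, 100e-6)
print(round(f, 1))  # → 503.3 Hz
```

With these (illustrative) values the resonance sits near the 540 Hz switching frequency, which is exactly the situation where a fixed design-stage choice of resonant frequency fails once the power system impedance shifts; active damping compensates at run time instead.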
Scan, Attend and Read: End-to-End Handwritten Paragraph Recognition with MDLSTM Attention
We present an attention-based model for end-to-end handwriting recognition. Our system does not require any segmentation of the input paragraph. The model is inspired by the differentiable attention models presented recently for speech recognition, image captioning or translation. The main difference is the implementation of covert and overt attention with a multi-dimensional LSTM network. Our principal contribution towards handwriting recognition lies in the automatic transcription without a prior segmentation into lines, which was critical in previous approaches. Moreover, the system is able to learn the reading order, enabling it to handle bidirectional scripts such as Arabic. We carried out experiments on the well-known IAM Database and report encouraging results which bring hope to perform full paragraph transcription in the near future.
Classifying ecommerce information sharing behaviour by youths on social networking sites
Teenagers and young adults are an economically critical demographic group, and they are confronted with an array of internet social networking services just as they are forming their online information seeking and sharing habits. Using a survey of 34,514 respondents from myYearbook.com, the research reported in this paper is an inferential analysis of information seeking and sharing behaviours in the ecommerce domain on four social networking sites (Facebook, MySpace, myYearbook, and Twitter). Using k-means clustering analysis, we find clusters within this demographic based on levels of being connected on and being engaged with social networking services. Research results show that the majority of this demographic have accounts on multiple social networking sites, with more than 40% having profiles on three social networking sites and an additional 20% having four social networking accounts. Statistical findings further show that there are distinct information seeking and sharing differences among eight clusters of users. We also investigate the motivations for using different social media sites, showing that the reasons for engaging differ among sites, indicating continued use of multiple social networking services. Implications are that companies and organizations interested in marketing to this demographic can cluster social networking users for more personalized targeting of advertisements and other information. Findings also show that this youth demographic has complex ecommerce information behaviours that call for nuanced approaches in advertising, marketing, or other areas of information targeting, and that the traditional web advertising model may not be an appropriate information dissemination method.
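The clustering method named above is standard k-means (Lloyd's algorithm). The sketch below implements it over invented two-dimensional points standing in for connectedness/engagement features; it is an illustration of the technique, not the paper's analysis of the survey data.

```python
import random

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means over tuples of floats: assign each point
    to its nearest center, then recompute centers as cluster means."""
    random.seed(1)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        centers = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Two obvious groups: lightly vs. heavily connected/engaged users.
pts = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
print(sorted(kmeans(pts, 2)))  # → centers near (1.33, 1.33) and (8.33, 8.33)
```

In the study, each respondent would be a point in a feature space derived from the survey, and k would be chosen (here, eight clusters) by inspecting cluster quality.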
Library for Flight Dynamics Modelling of a mini-UAV
This paper presents the use of the JSBSim library for flight dynamics modelling of a mini-UAV (Unmanned Aerial Vehicle). The first part of the paper gives general information about UAVs and the fundamentals of airplane flight mechanics, forces, moments, and the main components of a typical aircraft. The main section briefly describes a flight dynamics model and summarizes information about the JSBSim library. Then, a way of using the library for modelling a mini-UAV is shown. A basic script for lifting and stabilization of the UAV has been developed and described. Finally, the results of the JSBSim test are discussed.
Collaborative Filtering with Implicit Feedbacks by Discounting Positive Feedbacks
Recommender Systems are indispensable for providing personalized services on the Web. Recommending items that match a user's preference has been researched for a long time, and many useful approaches exist. In particular, Collaborative Filtering, which gives recommendations based on users' feedback on items, is considered useful. Feedback is categorized into explicit feedback and implicit feedback. In this paper, Collaborative Filtering with implicit feedback is addressed. Explicit feedback is provided by users intentionally and represents users' preferences for items explicitly. For example, in Netflix, users can rate movies on a scale of 1-5, and, based on these ratings, users can receive movie recommendations. On the other hand, implicit feedback is collected by the system automatically. In Amazon.com, products that users buy and click are used for recommendation. While Collaborative Filtering with explicit feedback has been a central topic for a long time, implicit feedback has recently become a more and more important research topic because it is easier to obtain and more abundant than explicit feedback. However, implicit feedback is often noisy: it often contains feedback that does not represent users' real preferences for items. Our approach addresses this noise problem. We propose three discounting methods for observed values in implicit feedback. The key idea is that there is hidden uncertainty in each observed feedback, and the effect of highly uncertain feedback is discounted. The three discounting methods need no additional information besides ordinary user-item feedback pairs and timestamps. Experiments with huge real-world datasets confirm that all three methods contribute to improving performance. Moreover, our discounting methods can easily be combined with existing methods and improve the recommendation accuracy of existing models.
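To illustrate the discounting idea, here is one plausible scheme: weight each observed interaction by an exponential decay in its age, so that old or accidental clicks count less toward the confidence in a user-item pair. The half-life and the count-based form are our own illustrative assumptions, not the paper's three methods.

```python
def discounted_confidence(count, age_days, half_life=30.0):
    """Confidence in an implicit user-item signal: the raw interaction
    count, exponentially discounted by the age of the interactions.
    After one half-life the weight halves."""
    return count * 0.5 ** (age_days / half_life)

# Ten clicks made a month ago carry the weight of five fresh clicks.
print(discounted_confidence(10, 30.0))  # → 5.0
print(discounted_confidence(10, 0.0))   # → 10.0
```

A model would then use this discounted value in place of the raw count when fitting, which is how such schemes combine with existing implicit-feedback methods using only feedback pairs and timestamps.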
[Factors Associated with Direct Oral Anticoagulants versus Vitamin K Antagonists in Patients with Non-valvular Atrial Fibrillation].
OBJECTIVE To describe the factors associated with direct oral anticoagulant (DOA) prescription in patients with atrial fibrillation (AF). METHOD This study was performed in Toulouse on a cohort of patients seen in rhythmology consultation and treated with vitamin K antagonists (VKA) or DOA for AF. A multivariate model was built using logistic regression to describe the factors associated with DOA prescription and, secondly, those associated with discontinuation of the anticoagulant. RESULTS Among the 140 patients included, 96 (66%) were treated with VKA and 48 (34%) with DOA. Recent AF diagnosis (OR 7.52, 95% CI [2.41; 23.29], p = 0.001), previous exposure to VKA (OR 17.11, 95% CI [4.48; 60.91], p < 0.001), and no current exposure to anti-platelet agents (APA) (OR 7.69, 95% CI [1.22; 50.00], p = 0.030) were associated with DOA prescription. Discontinuation of the anticoagulant (n = 24) was associated with DOA intake (OR 2.71, 95% CI [1.21; 6.08], p = 0.016). DISCUSSION DOA are less often prescribed than VKA in patients treated with APA. Switching from VKA to DOA was not systematic in patients diagnosed a long time ago. However, international normalized ratio (INR) values were stable in most patients treated with VKA at the time of switching to DOA. A more powerful study would confirm the factors associated with DOA prescription.
Overfitting and Neural Networks: Conjugate Gradient and Backpropagation
Methods for controlling the bias/variance tradeoff typically assume that overfitting or overtraining is a global phenomenon. For multi-layer perceptron (MLP) neural networks, global parameters such as the training time (e.g. based on validation tests), network size, or the amount of weight decay are commonly used to control the bias/variance tradeoff. However, the degree of overfitting can vary significantly throughout the input space of the model. We show that overselection of the degrees of freedom for an MLP trained with backpropagation can improve the approximation in regions of underfitting, while not significantly overfitting in other regions. This can be a significant advantage over other models. Furthermore, we show that “better” learning algorithms such as conjugate gradient can in fact lead to worse generalization, because they can be more prone to creating varying degrees of overfitting in different regions of the input space. While experimental results cannot cover all practical situations, our results do help to explain common behavior that does not agree with theoretical expectations. Our results suggest one important reason for the relative success of MLPs, bring into question common beliefs about neural network training regarding training algorithms, overfitting, and optimal network size, suggest alternate guidelines for practical use (in terms of the training algorithm and network size selection), and help to direct future work (e.g. regarding the importance of the MLP/BP training bias, the possibility of worse performance for “better” training algorithms, local “smoothness” criteria, and further investigation of localized overfitting).
District leaders as open networks: emerging business strategies in Italian industrial districts
Italian industrial districts are no longer self-contained systems of small firms, where firms' competitiveness is the result of physical proximity and links with the global economy are limited to export sales. A new generation of firms is taking the lead, reshaping the form of districts through their innovative strategies focused on R&D, design and ICT. Most of these firms are leaders within their markets and organize their value chains by coupling district knowledge and competencies with opportunities offered by globalization processes. The rise of these open networks contributes to the transformation of industrial districts and redefines the real drivers of district firms' competitiveness. Based on a survey of 650 Italian SMEs from 41 Italian districts, the paper describes the characteristics of this new firm model, compared to the traditional district one. The paper also discusses implications for districts in terms of innovation dynamics and governance.
Comparison of mouse and rat cytochrome P450-mediated metabolism in liver and intestine.
The liver is considered to be the major site of first-pass metabolism, but the small intestine is also able to contribute significantly. The improvement of existing in vitro techniques and the development of new ones, such as intestinal slices, allow a better understanding of the intestine as a metabolic organ. In this paper, the formation of metabolites of several human CYP3A substrates by liver and intestinal slices from rat and mouse was compared. The results show that liver slices exhibited a higher metabolic rate for the majority of the studied substrates, but some metabolites were produced at a higher rate by intestinal slices, compared with liver slices. Coincubation with ketoconazole inhibited the metabolic conversion in intestinal slices almost completely, but inhibition was variable in liver slices. To better understand the role of CYP3A in mice, we studied the relative mRNA expression of different CYP3A isoforms in intestine and liver from mice because, in this species, CYP3A expression has not been well described in these organs. It was found that in mice, CYP3A13 is more expressed in the intestine, whereas CYP3A11, CYP3A25, and CYP3A41 are more expressed in the liver, comparable to similar findings in the rat. Altogether, these data demonstrate that, in addition to liver, the intestine from mouse and rat may have an important role in the process of first-pass metabolism, depending on the substrate. Moreover, we show that intestinal slices are a useful in vitro technique to study gut metabolism.
Emotion malleability beliefs, emotion regulation, and psychopathology: Integrating affective and clinical science.
Beliefs that individuals hold about whether emotions are malleable or fixed, also referred to as emotion malleability beliefs, may play a crucial role in individuals' emotional experiences and their engagement in changing their emotions. The current review integrates affective science and clinical science perspectives to provide a comprehensive review of how emotion malleability beliefs relate to emotionality, emotion regulation, and specific clinical disorders and treatment. Specifically, we discuss how holding more malleable views of emotion could be associated with more active emotion regulation efforts, greater motivation to engage in active regulatory efforts, more effort expended regulating emotions, and lower levels of pathological distress. In addition, we explain how extending emotion malleability beliefs into the clinical domain can complement and extend current conceptualizations of major depressive disorder, social anxiety disorder, and generalized anxiety disorder. This may prove important given the increasingly central role emotion dysregulation has been given in conceptualization and intervention for these psychiatric conditions. Additionally, discussion focuses on how emotion beliefs could be more explicitly addressed in existing cognitive therapies. Promising future directions for research are identified throughout the review.
The Effect of Identifying Vulnerabilities and Patching Software on the Utility of Network Intrusion Detection
Vulnerability scanning and installing software patches for known vulnerabilities greatly affects the utility of network-based intrusion detection systems that use signatures to detect system compromises. A detailed timeline analysis of important remote-to-local vulnerabilities demonstrates (1) Vulnerabilities in widely-used server software are discovered infrequently (at most 6 times a year) and (2) Software patches to prevent vulnerabilities from being exploited are available before or simultaneously with signatures. Signature-based intrusion detection systems will thus never detect successful system compromises on small secure sites when patches are installed as soon as they are available. Network intrusion detection systems may detect successful system compromises on large sites where it is impractical to eliminate all known vulnerabilities. On such sites, information from vulnerability scanning can be used to prioritize the large numbers of extraneous alerts caused by failed attacks and normal background traffic. On one class B network with roughly 10 web servers, this approach successfully filtered out 95% of all remote-to-local alerts.
LASyM: A Learning Analytics System for MOOCs
Nowadays, the Web has revolutionized our vision of how to deliver courses in a radically transformed and enhanced way. Boosted by Cloud computing, the use of the Web in education has revealed new challenges and aspirations, such as MOOCs (Massive Open Online Courses), a technology-led revolution ushering in a new generation of learning environments. Expected to deliver effective education strategies, pedagogies and practices that lead to student success, massive open online courses, sometimes considered the "Linux of education", are increasingly developed by elite US institutions such as MIT, Harvard and Stanford, supplying open/distance learning to large online communities without fees. MOOCs have the potential to enable free university-level education on an enormous scale. Nevertheless, a concern often raised about MOOCs is that only a very small proportion of learners complete a course while thousands enrol. In this paper, we present LASyM, a learning analytics system for massive open online courses. The system is a Hadoop-based one whose main objective is to provide Learning Analytics for MOOC communities as a means to help them investigate massive raw data, generated by MOOC platforms around learning outcomes and assessments, and reveal any useful information to be used in designing learning-optimized MOOCs. To evaluate the effectiveness of the proposed system, we developed a method to identify, with low latency, online learners more likely to drop out.
Mining Mid-level Visual Patterns with Deep CNN Activations
The purpose of mid-level visual element discovery is to find clusters of image patches that are representative of, and which discriminate between, the contents of the relevant images. Here we propose a pattern-mining approach to the problem of identifying mid-level elements within images, motivated by the observation that such techniques have been very effective, and efficient, in achieving similar goals when applied to other data types. We show that Convolutional Neural Network (CNN) activations extracted from image patches typically possess two appealing properties that enable seamless integration with pattern mining techniques. The marriage between CNN activations and a pattern mining technique leads to fast and effective discovery of representative and discriminative patterns from a huge number of image patches, from which mid-level elements are retrieved. Given the patterns and retrieved mid-level visual elements, we propose two methods to generate image feature representations. The first encoding method uses the patterns as codewords in a dictionary in a manner similar to the Bag-of-Visual-Words model. We thus label this a Bag-of-Patterns representation. The second relies on mid-level visual elements to construct a Bag-of-Elements representation. We evaluate the two encoding methods on object and scene classification tasks, and demonstrate that our approach outperforms or matches the performance of the state of the art on these tasks.
Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations
In this paper, we describe the design of Neurogrid, a neuromorphic system for simulating large-scale neural models in real time. Neuromorphic systems realize the function of biological neural systems by emulating their structure. Designers of such systems face three major design choices: 1) whether to emulate the four neural elements-axonal arbor, synapse, dendritic tree, and soma-with dedicated or shared electronic circuits; 2) whether to implement these electronic circuits in an analog or digital manner; and 3) whether to interconnect arrays of these silicon neurons with a mesh or a tree network. The choices we made were: 1) we emulated all neural elements except the soma with shared electronic circuits; this choice maximized the number of synaptic connections; 2) we realized all electronic circuits except those for axonal arbors in an analog manner; this choice maximized energy efficiency; and 3) we interconnected neural arrays in a tree network; this choice maximized throughput. These three choices made it possible to simulate a million neurons with billions of synaptic connections in real time-for the first time-using 16 Neurocores integrated on a board that consumes three watts.
Emotional Intelligence in the Classroom : Skill-Based Training for Teachers and Students
Successful schools ensure that all students master basic skills such as reading and math and have strong backgrounds in other subject areas, including science, history, and foreign language. Recently, however, educators and parents have begun to support a broader educational agenda – one that enhances teachers’ and students’ social and emotional skills. Research indicates that social and emotional skills are associated with success in many areas of life, including effective teaching, student learning, quality relationships, and academic performance. Moreover, a recent meta-analysis of over 300 studies showed that programs designed to enhance social and emotional learning significantly improve students’ social and emotional competencies as well as academic performance. Incorporating social and emotional learning programs into school districts can be challenging, as programs must address a variety of topics in order to be successful. One organization, the Collaborative for Academic, Social, and Emotional Learning (CASEL), provides leadership for researchers, educators, and policy makers to advance the science and practice of school-based social and emotional learning programs. According to CASEL, initiatives to integrate programs into schools should include training on social and emotional skills for both teachers and students, and should receive backing from all levels of the district, including the superintendent, school principals, and teachers. Additionally, programs should be field-tested, evidence-based, and founded on sound
Simulation and the design building block approach in the design of ships and other complex systems
This paper is in many respects a continuation of the earlier paper by the author published in Proc. R. Soc. A in 1998 entitled ‘A comprehensive methodology for the design of ships (and other complex systems)’. The earlier paper described the approach to the initial design of ships developed by the author during some 35 years of design practice, including two previous secondments to teach ship design at UCL. The present paper not only takes that development forward, it also explains how the research tool demonstrating the author’s approach to initial ship design has now been incorporated in an industry-based design system to provide a working, graphically and numerically integrated design system. This achievement is exemplified by a series of practical design investigations, undertaken by the UCL Design Research Centre led by the author, which were mainly undertaken for industry clients in order to investigate real problems to which the approach has brought significant insights. The other new strand in the present paper is the emphasis on the human factors or large-scale ergonomics dimension, vital to complex and large-scale design products but rarely hitherto given sufficient prominence in the crucial formative stages of large-scale design because of the inherent difficulties in doing so. The UCL Design Building Block approach has now been incorporated in the established PARAMARINE ship design system through a module entitled SURFCON. Work is now underway on an Engineering and Physical Sciences Research Council joint project with the University of Greenwich to interface the latter’s escape simulation tool maritimeEXODUS with SURFCON to provide initial design guidance to ship designers on personnel movement. The paper’s concluding section considers the wider applicability of the integration of simulation during initial design with graphically driven synthesis to other complex and large-scale design tasks.
The paper concludes by suggesting how such an approach to complex design can contribute to the teaching of designers and, moreover, how this design approach can enable a creative qualitative approach to engineering design to be sustained despite the risk that advances in computer-based methods might encourage emphasis being accorded solely to quantitative analysis.
A Trustless Privacy-Preserving Reputation System
Reputation systems are crucial for distributed applications in which users have to be made accountable for their actions, such as ecommerce websites. However, existing systems often disclose the identity of the raters, which might deter honest users from submitting reviews out of fear of retaliation from the ratees. While many privacy-preserving reputation systems have been proposed, we observe that none of them is simultaneously truly decentralized, trustless, and suitable for real world usage in, for example, e-commerce applications. In this paper, we present a blockchain based decentralized privacy-preserving reputation system. We demonstrate that our system provides correctness and security while eliminating the need for users to trust any third parties or even fellow users.
Sales prediction for a pharmaceutical distribution company: A data mining based approach
For pharmaceutical distribution companies it is essential to obtain good estimates of medicine needs, due to the short shelf life of many medicines and the need to control stock levels, so as to avoid excessive inventory costs while guaranteeing customer demand satisfaction, and thus decreasing the possibility of loss of customers due to stock outages. In this paper we explore the use of the time series data mining technique for the sales prediction of individual products of a pharmaceutical distribution company in Portugal. Through data mining techniques, the historical data of product sales are analyzed to detect patterns and make predictions based on the experience contained in the data. The results obtained with the technique as well as with the proposed method suggest that the performed modelling may be considered appropriate for the short term product sales prediction.
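The abstract does not name the specific time-series technique used; as a minimal stand-in, simple exponential smoothing illustrates how a one-step-ahead product sales forecast can be derived from historical data (the smoothing factor `alpha` and the sales figures are assumptions for this example):

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: blend each new observation into a
    running level and use the final level as the next-period forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical monthly sales of one product
forecast = ses_forecast([10, 12, 11, 13])
```

Even a baseline this simple is useful when evaluating more elaborate data-mining models: a candidate model should at least beat it on held-out periods.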
Non-negative Matrix Factorization with Sparseness Constraints
Non-negative matrix factorization (NMF) is a recently developed technique for finding parts-based, linear representations of non-negative data. Although it has successfully been applied in several applications, it does not always result in parts-based representations. In this paper, we show how explicitly incorporating the notion of ‘sparseness’ improves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF and for our extension. Our hope is that this will further the application of these methods to solving novel data-analysis problems.
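For reference, the standard multiplicative-update NMF that the paper extends can be sketched in a few lines (NumPy here rather than the paper's MATLAB; the sparseness-constrained variant adds a projection step after each update, which is omitted below):

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Factor non-negative V (m x n) as W (m x r) @ H (r x n) using the
    Lee-Seung multiplicative updates for the Frobenius-norm objective.
    Updates multiply by non-negative ratios, so W and H stay non-negative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-9  # avoids division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Small non-negative test matrix
V = np.random.default_rng(1).random((8, 6)) + 0.1
W, H = nmf(V, r=3)
err = np.linalg.norm(V - W @ H)
```

A sparseness constraint in the paper's sense would, after each update, project the columns of W (or rows of H) onto the set of vectors with a prescribed ratio of L1 to L2 norm.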
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 × 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing/adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.
Human posture recognition using human skeleton provided by Kinect
Human posture recognition is an attractive and challenging topic in computer vision because of its wide range of applications. The arrival of the low-cost Kinect device with its SDK gives us the possibility to resolve with ease some difficult problems encountered when working with conventional cameras. In this paper, we explore the capacity of using the skeleton information provided by Kinect for human posture recognition in the context of a health monitoring framework. We conduct 7 different experiments with 4 types of features extracted from the human skeleton. The obtained results show that this device can detect with high accuracy four postures of interest (lying, sitting, standing, bending).
Predictors of coping in parents of children with an intellectual disability: comparison between Lebanese mothers and fathers.
This cross-sectional study was designed to assess the predictors of coping behaviors of 147 Lebanese parents (101 mothers and 46 fathers) with a child with intellectual disability. It assessed the contribution of child's and parent's characteristics, informal social support, and stress on the coping behaviors of fathers and mothers. Multiple regression analysis confirmed that the father's education, informal social support, and stress were the best predictors of coping. The child's age, severity of illness, and parental health did not significantly contribute to predicting coping behaviors. Contrary to expectations in a Middle Eastern culture, both fathers and mothers reported similar levels of stress, perceived informal social support, and coping. Although informal social support cannot be forced on parents, health professionals can mobilize resources that are culturally sensitive, such as home visitation by nurses or support from other parents. This may especially be beneficial in developing countries with limited resources.
Investigation into high-temperature corrosion in a large-scale municipal waste-to-energy plant
High-temperature corrosion in the superheater of a large-scale waste-to-energy plant was investigated. A comparison of nickel-/iron-based alloys and austenitic stainless steel probes placed in the furnace demonstrated that temperature and particle deposition greatly influence corrosion. Nickel-based alloys performed better than the other metal alloys, though an aluminide coating further increased their corrosion resistance. Sacrificial baffles provided additional room for deposit accumulation, resulting in vigorous deposit-induced corrosion. Computational modelling (FLUENT code) was used to simulate flow characteristics and heat transfer. This study has shown that the use of aluminide coatings is a promising technique for minimising superheater corrosion in such facilities.
Motivation in online learning: Testing a model of self-determination theory
As high attrition rates become a pressing issue of online learning and a major concern of online educators, it is important to investigate online learner motivation, including its antecedents and outcomes. Drawing on Deci and Ryan's self-determination theory (SDT), this study proposed and tested a model for online learner motivation in two online certificate programs (N = 262). Results from structural equation modeling provided evidence for the mediating effect of need satisfaction between contextual support and motivation/self-determination; however, motivation/self-determination failed to predict learning outcomes. Additionally, this study supported SDT's main theorizing that intrinsic motivation, extrinsic motivation, and amotivation are distinctive constructs, and found that the direct and indirect effects of contextual support exerted opposite impacts on learning outcomes. Implications for online learner support are discussed.
Easy over hard: a case study on deep learning
While deep learning is an exciting new technique, the benefits of this method need to be assessed with respect to its computational cost. This is particularly important for deep learning since these learners need hours (to weeks) to train the model. Such long training times limit the ability of (a) a researcher to test the stability of their conclusion via repeated runs with different random seeds; and (b) other researchers to repeat, improve, or even refute that original work. For example, recently, deep learning was used to find which questions in the Stack Overflow programmer discussion forum can be linked together. That deep learning system took 14 hours to execute. We show here that applying a very simple optimizer called differential evolution (DE) to fine-tune an SVM can achieve similar (and sometimes better) results. The DE approach terminated in 10 minutes, i.e. 84 times faster than the deep learning method. We offer these results as a cautionary tale to the software analytics community and suggest that not every new innovation should be applied without critical analysis. If researchers deploy some new and expensive process, that work should be baselined against some simpler and faster alternatives.
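The abstract does not include the DE implementation; a minimal DE/rand/1/bin sketch is shown below, minimizing a toy quadratic in place of the paper's cross-validated SVM error (the population size, F, CR, and the objective are assumptions for illustration):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=100, seed=0):
    """DE/rand/1/bin: mutate three random members, crossover with the
    target, keep the trial if it is no worse. Returns (best_x, best_f)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fitness[i]:
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=fitness.__getitem__)
    return pop[best], fitness[best]

# Stand-in objective: in the paper this would be cross-validated SVM
# error over hyperparameters such as (C, gamma); here a quadratic bowl.
best_x, best_f = differential_evolution(
    lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [(-5, 5), (-5, 5)])
```

Because each evaluation is one model fit, the total cost is simply `pop_size * gens` fits, which is what makes DE tuning of a fast learner like SVM so much cheaper than training one deep network.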
Emotional intelligence and social and academic adaptation to school.
In a sample of 127 Spanish adolescents, the ability to understand and manage emotions, assessed by a performance measure of emotional intelligence (the MSCEIT), correlated positively with teacher ratings of academic achievement and adaptation for both males and females. Among girls, these emotional abilities also correlated positively with peer friendship nominations. After controlling for IQ and the Big Five personality traits, the ability to understand and manage emotions remained significantly associated with teacher ratings of academic adaptation among boys and peer friendship nominations among girls. Self-perceived emotional intelligence was unrelated to these criteria. These findings provide partial support for hypotheses that emotional abilities are associated with indicators of social and academic adaptation to school.
Hierarchical Parsing Net: Semantic Scene Parsing From Global Scene to Objects
This paper proposes a novel Hierarchical Parsing Net (HPN) for semantic scene parsing. Unlike previous methods, which separately classify each object, HPN leverages global scene semantic information and the context among multiple objects to enhance scene parsing. On the one hand, HPN uses the global scene category to constrain the semantic consistency between the scene and each object. On the other hand, the context among all objects is also modeled to avoid incompatible object predictions. Specifically, HPN consists of four steps. In the first step, we extract scene and local appearance features. Based on these appearance features, the second step is to encode a contextual feature for each object, which models both the scene-object context (the context between the scene and each object) and the interobject context (the context among different objects). In the third step, we classify the global scene and then use the scene classification loss and a backpropagation algorithm to constrain the scene feature encoding. In the fourth step, a label map for scene parsing is generated from the local appearance and contextual features. Our model outperforms many state-of-the-art deep scene parsing networks on five scene parsing databases.
Weakly-Supervised Acquisition of Labeled Class Instances using Graph Random Walks
We present a graph-based semi-supervised label propagation algorithm for acquiring open-domain labeled classes and their instances from a combination of unstructured and structured text sources. This acquisition method significantly improves coverage compared to a previous set of labeled classes and instances derived from free text, while achieving comparable precision.
Data preprocessing algorithm for Web Structure Mining
The World Wide Web is an extremely large collection of information, beyond our imagination, and it provides enough information according to users' needs. The Web is growing rapidly, as approximately 70 million pages are added daily. Knowledge discovery on web data is referred to as Web Mining. Web Structure Mining is based on the analysis of patterns in the hyperlink structure of the web. Like Data Mining, Web Mining has four stages: data collection, preprocessing, knowledge discovery and knowledge analysis. This paper focuses on the first two stages, data collection and preprocessing. Data collection gathers the data required for analysis. Data preprocessing is considered an important stage of Web Structure Mining because data available on the web is unstructured, heterogeneous and noisy.
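As a hedged sketch of the preprocessing stage described above, the hyperlink structure that Web Structure Mining analyses can be extracted from raw pages with Python's standard-library HTML parser (the sample page is hypothetical):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags, discarding the surrounding
    unstructured content -- the hyperlinks are the input to structure mining."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

p = LinkExtractor()
p.feed('<html><a href="/a">A</a><p>noise text</p><a href="/b">B</a></html>')
```

In a full pipeline, the extracted links from each crawled page would be normalized (resolving relative URLs, dropping fragments) before building the page-link graph.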
Bioavailability and bioefficacy of polyphenols in humans. I. Review of 97 bioavailability studies.
Polyphenols are abundant micronutrients in our diet, and evidence for their role in the prevention of degenerative diseases is emerging. Bioavailability differs greatly from one polyphenol to another, so that the most abundant polyphenols in our diet are not necessarily those leading to the highest concentrations of active metabolites in target tissues. Mean values for the maximal plasma concentration, the time to reach the maximal plasma concentration, the area under the plasma concentration-time curve, the elimination half-life, and the relative urinary excretion were calculated for 18 major polyphenols. We used data from 97 studies that investigated the kinetics and extent of polyphenol absorption among adults, after ingestion of a single dose of polyphenol provided as pure compound, plant extract, or whole food/beverage. The metabolites present in blood, resulting from digestive and hepatic activity, usually differ from the native compounds. The nature of the known metabolites is described when data are available. The plasma concentrations of total metabolites ranged from 0 to 4 μmol/L with an intake of 50 mg aglycone equivalents, and the relative urinary excretion ranged from 0.3% to 43% of the ingested dose, depending on the polyphenol. Gallic acid and isoflavones are the most well-absorbed polyphenols, followed by catechins, flavanones, and quercetin glucosides, but with different kinetics. The least well-absorbed polyphenols are the proanthocyanidins, the galloylated tea catechins, and the anthocyanins. Data are still too limited for assessment of hydroxycinnamic acids and other polyphenols. These data may be useful for the design and interpretation of intervention studies investigating the health effects of polyphenols.
Has the biobank bubble burst? Withstanding the challenges for sustainable biobanking in the digital era
Biobanks have been heralded as essential tools for translating biomedical research into practice, driving precision medicine to improve pathways for global healthcare treatment and services. Many nations have established specific governance systems to facilitate research and to address the complex ethical, legal and social challenges that they present, but this has not led to uniformity across the world. Despite significant progress in responding to the ethical, legal and social implications of biobanking, operational, sustainability and funding challenges continue to emerge. No coherent strategy has yet been identified for addressing them. This has brought into question the overall viability and usefulness of biobanks in light of the significant resources required to keep them running. This review sets out the challenges that the biobanking community has had to overcome since their inception in the early 2000s. The first section provides a brief outline of the diversity in biobank and regulatory architecture in seven countries: Australia, Germany, Japan, Singapore, Taiwan, the UK, and the USA. The article then discusses four waves of responses to biobanking challenges. This article had its genesis in a discussion on biobanks during the Centre for Health, Law and Emerging Technologies (HeLEX) conference in Oxford UK, co-sponsored by the Centre for Law and Genetics (University of Tasmania). This article aims to provide a review of the issues associated with biobank practices and governance, with a view to informing the future course of both large-scale and smaller scale biobanks.
Continuous Blood Pressure Measurement From Invasive to Unobtrusive: Celebration of 200th Birth Anniversary of Carl Ludwig
The year 2016 marks the 200th birth anniversary of Carl Friedrich Wilhelm Ludwig (1816-1895). As one of the most remarkable scientists, Ludwig invented the kymograph, which for the first time enabled the recording of continuous blood pressure (BP), opening the door to the modern study of physiology. Almost a century later, intraarterial BP monitoring through an arterial line came into clinical use. Subsequently, arterial tonometry and the volume clamp method were developed and applied to continuous BP measurement in a noninvasive way. In the last two decades, additional efforts have been made to develop unobtrusive methods for continuous BP monitoring without the use of a cuff. This review summarizes the key milestones in continuous BP measurement; that is, the kymograph, intraarterial BP monitoring, arterial tonometry, the volume clamp method, and cuffless BP technologies. Our emphasis is on recent studies of unobtrusive BP measurement as well as on challenges and future directions.
Genetic Algorithm and Simulated Annealing to estimate optimal process parameters of the abrasive waterjet machining
In this study, two computational approaches, Genetic Algorithm and Simulated Annealing, are applied to search for a set of optimal process parameter values that leads to the minimum value of machining performance. The objectives of the applied techniques are: (1) to estimate the minimum value of the machining performance compared to the machining performance values of the experimental data and regression modeling, (2) to estimate the optimal process parameter values, which have to be within the range of the minimum and maximum coded values for the process parameters of the experimental design used for the experimental trials, and (3) to evaluate the number of iterations generated by the computational approaches that lead to the minimum value of machining performance. The set of machining process parameters and the machining performance measure considered in this work deal with real experimental data from a non-conventional machining operation, abrasive waterjet machining. The results of this study showed that both computational approaches managed to estimate the optimal process parameters, leading to the minimum value of machining performance when compared to the results of the real experimental data.
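The Simulated Annealing side of such a search can be sketched as below: a hypothetical quadratic regression model of a machining response (e.g. surface roughness) is minimized within coded parameter bounds. The model coefficients, step size, and cooling schedule are all assumptions for illustration, not the paper's fitted model or settings:

```python
import math
import random

random.seed(42)

# Hypothetical quadratic regression model of a machining response in coded
# process-parameter units (illustrative coefficients only).
def roughness(x1, x2):
    return 3.0 + 0.8 * x1 - 1.1 * x2 + 0.5 * x1 * x1 + 0.7 * x2 * x2 + 0.2 * x1 * x2

LO, HI = -1.0, 1.0  # coded bounds of the experimental design

def clamp(v):
    return max(LO, min(HI, v))

def anneal(iters=5000, temp=1.0, cool=0.999):
    x = [random.uniform(LO, HI), random.uniform(LO, HI)]
    f = roughness(*x)
    best, best_f = list(x), f
    for _ in range(iters):
        # propose a small Gaussian perturbation, kept inside the coded bounds
        cand = [clamp(v + random.gauss(0, 0.1)) for v in x]
        cf = roughness(*cand)
        # accept improvements always; worse moves with Boltzmann probability
        if cf < f or random.random() < math.exp((f - cf) / temp):
            x, f = cand, cf
            if f < best_f:
                best, best_f = list(x), f
        temp *= cool  # geometric cooling schedule
    return best, best_f

best, best_f = anneal()
```

The Genetic Algorithm counterpart would replace the single perturbed candidate with a population evolved by selection, crossover, and mutation over the same bounded coded space.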
On the Limitations of Unsupervised Bilingual Dictionary Induction
Unsupervised machine translation—i.e., not assuming any cross-lingual supervision signal, whether a dictionary, translations, or comparable corpora—seems impossible, but nevertheless, Lample et al. (2018a) recently proposed a fully unsupervised machine translation (MT) model. The model relies heavily on an adversarial, unsupervised alignment of word embedding spaces for bilingual dictionary induction (Conneau et al., 2018), which we examine here. Our results identify the limitations of current unsupervised MT: unsupervised bilingual dictionary induction performs much worse on morphologically rich languages that are not dependent marking, and when monolingual corpora from different domains or different embedding algorithms are used. We show that a simple trick, exploiting a weak supervision signal from identical words, enables more robust induction, and establish a near-perfect correlation between unsupervised bilingual dictionary induction performance and a previously unexplored graph similarity metric.
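The "simple trick" of using identical words as a weak supervision signal can be illustrated with orthogonal Procrustes on synthetic embedding spaces. Everything below (dimensions, noise level, the assumption that the first k rows are identically spelled words) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy monolingual embedding spaces: the "target" space is a hidden rotation
# of the "source" space plus small noise (purely synthetic).
d, n = 8, 50
X = rng.normal(size=(n, d))                    # source-language embeddings
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # hidden ground-truth rotation
Y = X @ Q + 0.01 * rng.normal(size=(n, d))     # target-language embeddings

# Weak supervision from "identical words": assume the first k rows are
# spelled identically in both languages and hence already aligned.
k = 10

# Orthogonal Procrustes: W = argmin ||X[:k] W - Y[:k]||_F s.t. W orthogonal,
# solved in closed form via the SVD of the cross-covariance.
U, _, Vt = np.linalg.svd(X[:k].T @ Y[:k])
W = U @ Vt

# Induce the dictionary by nearest neighbour in the mapped space.
mapped = X @ W
dists = np.linalg.norm(mapped[:, None, :] - Y[None, :, :], axis=2)
pred = dists.argmin(axis=1)
precision_at_1 = float((pred == np.arange(n)).mean())
```

In this clean synthetic setting the rotation is recovered almost exactly; the paper's point is that real cross-lingual spaces are far less isometric, which is where fully unsupervised alignment degrades.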
A Likelihood-Based Qualitative Flexible Approach with Hesitant Fuzzy Linguistic Information
The qualitative flexible multiple criteria method (QUALIFLEX) is a useful outranking method for multi-criteria decision analysis due to its flexibility in regard to cardinal and ordinal information. This paper puts forward an extended QUALIFLEX approach with a new likelihood-based comparison method to address multi-criteria decision-making problems in a hesitant fuzzy linguistic environment. The rankings produced by our new comparison method are more convincing than those obtained by existing methods, such as likelihood, distance measures, and the score function of hesitant fuzzy linguistic term sets or hesitant fuzzy linguistic elements. The proposed QUALIFLEX model, which is based on the likelihood-based comparison method, can measure the level of concordance or discordance of the complete preference order for tackling multi-criteria decision-making problems. Finally, two cases are presented as a comparative analysis between the proposed approach and other related methods. These cases demonstrate the effectiveness and flexibility of the proposed methodology in the context of hesitant fuzzy linguistic information.
Resilience of students' passwords against attacks
Passwords are still the predominant mode of authentication in contemporary information systems, despite a long list of problems associated with their insecurity. Their primary advantages are ease of use and low cost of implementation, compared to other systems of authentication (e.g. two-factor, biometry, …). In this paper we present an analysis of passwords used by students of a university and their resilience against brute force and dictionary attacks. The passwords were obtained from the university's computing center in plaintext format and span a very long period - the first passwords were created before 1980. The results show that early passwords are extremely easy to crack: the percentage of cracked passwords is above 95 % for those created before 2006. Surprisingly, more than 40 % of passwords created in 2014 were easily broken within a few hours. The results show that users - in our case students - despite positive trends, still choose easy-to-break passwords. This work adds to the loud warnings that a shift from traditional password schemes to more elaborate systems is needed.
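A dictionary attack of the kind evaluated in such studies can be sketched as follows. The hash function, word list, and mutation rules here are illustrative assumptions, not the study's actual cracking setup:

```python
import hashlib

# Hypothetical leaked password hashes (unsalted SHA-256, purely for
# illustration; real systems should use salted, slow hashes).
leaked = {
    hashlib.sha256(p.encode()).hexdigest()
    for p in ["password", "maribor1", "qwerty123"]
}

wordlist = ["password", "maribor", "qwerty", "letmein"]

def mutations(word):
    """Common user tweaks tried by dictionary crackers."""
    yield word
    yield word.capitalize()
    for suffix in ("1", "12", "123", "!"):
        yield word + suffix

cracked = []
for word in wordlist:
    for cand in mutations(word):
        if hashlib.sha256(cand.encode()).hexdigest() in leaked:
            cracked.append(cand)

# cracked now holds the recovered passwords
```

The attack succeeds exactly when users pick dictionary words with predictable tweaks, which is why slow, salted hashing only raises the cost of cracking and does not compensate for weak password choice.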
Endoscopic Guided Biliary Drainage: How Can We Achieve Efficient Biliary Drainage?
Currently, endoscopic retrograde cholangiopancreatography (ERCP) is the preferred procedure for biliary drainage for various pancreatico-biliary disorders. ERCP is successful in 90% of the cases, but is unsuccessful in cases with altered anatomy or with tumors obstructing access to the duodenum. Due to the morbidity and mortality associated with surgical or percutaneous approaches in unsuccessful ERCP cases, biliary endoscopists have been using endoscopic ultrasound-guided biliary drainage (EUS-BD) more frequently within the last decade in different countries. As with any novel advanced endoscopic procedure that incorporates various approaches, advanced endoscopists all over the world have innovated and adopted diverse EUS-BD techniques. Indications for EUS-BD include failed conventional ERCP, altered anatomy, tumor preventing access into the biliary tree and contraindication to percutaneous access (e.g., ascites). EUS-BD utilizing the EUS-guided rendezvous technique is conducted by creating a tract from either the stomach or the duodenum into the bile duct. Although EUS-BD has rapidly been gaining traction and popularity in the endoscopic world, the indications and methods have yet to be standardized. There are several access routes and techniques that are employed by advanced endoscopists throughout the world for BD. This article reviews the currently practiced EUS-BD techniques, including indications, technical details (intrahepatic or extrahepatic approach), equipment, patient selection, complications, and overall advantages and limitations.
A Minimal Solution for Non-perspective Pose Estimation from Line Correspondences
In this paper, we study and propose solutions to the relatively uninvestigated non-perspective pose estimation problem from line correspondences. Specifically, we represent the 2D and 3D line correspondences as Plücker lines and derive the minimal solution for the minimal problem of three line correspondences with a Gröbner basis. Our minimal 3-Line algorithm, which gives up to eight solutions, is well-suited for robust estimation with RANSAC. We show that our algorithm works as a least-squares solver that takes in more than three line correspondences without any reformulation. In addition, our algorithm does not require initialization in both the minimal 3-Line and least-squares n-Line cases. Furthermore, our algorithm works without a need for reformulation under the special case of perspective pose estimation when all line correspondences are observed from one single camera. We verify our algorithms with both simulated and real-world data.
The Life Satisfaction Advantage of Being Married and Gender Specialization
This investigation examined whether the life satisfaction advantage of married over unmarried persons decreased over the last three decades, and whether changes in contextual gender specialization explained this trend. The author used representative data from the World Values Survey–European Values Study (WVS–EVS)-integrated data set for 87 countries (N = 292,525) covering a period of 29 years. Results showed that the life satisfaction advantage of being married decreased among men but not among women. The analysis did not support the hypothesis that contextual gender specialization shaped the observed trend. Only in developed countries did the declining contextual specialization correlate with a smaller life satisfaction advantage of being married. This evidence suggests that the advantages of marriage are greater under conditions that support freedom of choice rather than economic necessity.
Shot By Both Sides: Art-Science And The War Between Science And The Humanities
There is a fundamental philosophical split between the modern culture of science and the postmodern culture of the humanities. This cultural estrangement is, among other things, the underlying cause for the lack of acceptance of art-science and technology-based art in the mainstream art world. However, in the last two decades the study of complexity has introduced a revolution across the sciences. It is suggested here that complexity thinking can be extended to usher in a revolution in the humanities as well. The apparently irreconcilable world views of modernism and postmodernism can be subsumed and unified by a new synthesis called complexism. And artists working on the complexity frontier can serve a key role in helping to bring this about.
Network motifs: simple building blocks of complex networks.
Complex networks are studied across many fields of science. To uncover their structural design principles, we defined "network motifs," patterns of interconnections occurring in complex networks at numbers that are significantly higher than those in randomized networks. We found such motifs in networks from biochemistry, neurobiology, ecology, and engineering. The motifs shared by ecological food webs were distinct from the motifs shared by the genetic networks of Escherichia coli and Saccharomyces cerevisiae or from those found in the World Wide Web. Similar motifs were found in networks that perform information processing, even though they describe elements as different as biomolecules within a cell and synaptic connections between neurons in Caenorhabditis elegans. Motifs may thus define universal classes of networks. This approach may uncover the basic building blocks of most networks.
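Motif detection as defined above amounts to counting a subgraph pattern in the real network and comparing the count against randomized networks. A toy sketch for the feed-forward loop, using a simplified null model (random graphs with the same node and edge counts, standing in for the paper's degree-preserving randomization):

```python
import itertools
import random

random.seed(1)

def count_ffl(edges, nodes):
    """Count feed-forward loops: a->b, b->c, a->c with a, b, c distinct."""
    es = set(edges)
    return sum(
        1
        for a, b, c in itertools.permutations(nodes, 3)
        if (a, b) in es and (b, c) in es and (a, c) in es
    )

nodes = list(range(6))
# Toy regulatory-style network containing two overlapping feed-forward loops.
real = [(0, 1), (1, 2), (0, 2), (2, 3), (0, 3), (3, 4), (4, 5)]
n_real = count_ffl(real, nodes)

# Simplified null model: random directed graphs with the same number of
# nodes and edges (no self-loops).
def random_graph(n_nodes, n_edges):
    pairs = [(i, j) for i in range(n_nodes) for j in range(n_nodes) if i != j]
    return random.sample(pairs, n_edges)

null_counts = [
    count_ffl(random_graph(len(nodes), len(real)), nodes) for _ in range(200)
]
mean = sum(null_counts) / len(null_counts)
# A motif is a pattern whose real count significantly exceeds the null mean.
```

In practice the comparison uses a z-score against degree-preserving randomizations (edge swaps that keep each node's in- and out-degree), which is a stricter null model than the one sketched here.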
Deep Learning with Differential Privacy
Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
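The core of differentially private training in this setting (DP-SGD) is per-example gradient clipping followed by Gaussian noise addition before the averaged update. A minimal numpy sketch; the clip norm, noise multiplier, and learning rate are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, per_example_grads, clip_norm=1.0, noise_mult=1.1, lr=0.1):
    """One differentially private SGD step: clip each per-example gradient
    to bound its L2 norm, sum, add Gaussian noise scaled to the clip norm,
    then average and update. noise_mult plays the role of sigma in the
    Gaussian mechanism."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    g_hat = (total + noise) / len(per_example_grads)
    return w - lr * g_hat

# Toy usage: gradients from a hypothetical batch of 4 examples in R^3.
w = np.zeros(3)
grads = [rng.normal(size=3) for _ in range(4)]
w_new = dp_sgd_step(w, grads)
```

Clipping bounds each example's influence on the update, which is what makes the added Gaussian noise sufficient for a differential privacy guarantee; the cumulative privacy cost over many steps is what the paper's refined (moments) accounting tracks.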
Nebivolol Effects on Nitric Oxide Levels, Blood Pressure, and Renal Function in Kidney Transplant Patients.
In hypertensive kidney transplant recipients, the effects of nebivolol vs metoprolol on nitric oxide (NO) blood level, estimated glomerular filtration rate (eGFR), and blood pressure (BP) have not been previously reported. In a 12-month prospective, randomized, open-label, active-comparator trial, hypertensive kidney transplant recipients were treated with nebivolol (n=15) or metoprolol (n=15). Twenty-nine patients (nebivolol [n=14], metoprolol [n=15]) completed the trial. The primary endpoint was change in blood NO level after 12 months of treatment. Secondary endpoints were changes in eGFR, BP, and number of antihypertensive drug classes used. After 12 months of treatment, least squares mean change in plasma NO level in the nebivolol kidney transplant recipient group younger than 50 years was higher by 68.19% (99.17% confidence interval [CI], 13.02-123.36), 69.54% (99.17% CI, 12.71-126.37), and 66.80% (99.17% CI, 12.95-120.64) compared with the metoprolol group younger than 50 years, the metoprolol group 50 years and older, and the nebivolol group 50 years and older, respectively. The baseline to month 12 change in mean arterial BP, eGFR, and number of antihypertensive drug classes used was not significantly different between the treatment groups. In hypertensive kidney transplant recipients, nebivolol use in patients younger than 50 years increased blood NO.
School environment factors were associated with BMI among adolescents in Xi'an City, China
BACKGROUND School environment influences students' behaviours. The purpose of this research was to identify school environment factors associated with BMI. METHODS A cross-sectional study was conducted among 1792 school-aged adolescents from 30 schools in six districts in Xi'an City in 2004. Height and weight were measured by trained field staff. School environment characteristics such as physical factors (school facilities, school shops and fast food outlets in the school area), school curricula and policies were collected from school doctors using a school environment questionnaire. School environment factors were identified in linear mixed effect models with BMI as the outcome, adjusted for socio-demographic factors. RESULTS After adjusting for socio-demographic factors, BMI was associated with the availability of soft drinks at school shops and with the availability and number of western food outlets in the school vicinity. School curricula such as sports meetings and health education sessions were also associated with BMI. CONCLUSIONS Urgent actions are needed to address the obesogenic elements of school environments. Community and school policy makers should make efforts to help students avoid exposure to fast food outlets in the school area and soft drinks at school shops, and to improve school curricula to promote healthy behaviours.
Multi-policy optimization in decentralized autonomic systems
Autonomic computing systems are those that are capable of managing themselves based only on high-level objectives given by humans. In such systems the details of how to meet their objectives, even in the face of changing operating conditions, are left to the systems themselves. Therefore, autonomic systems are required to be able to self-optimize, self-heal, self-protect, and self-configure. Enabling autonomic behaviour is particularly challenging in decentralized autonomic systems, where central control is not tractable or even possible, due to the large number and geographical dispersion of the entities involved. In such systems, entities only have local views of their immediate environments and no global view of the system exists. Decentralized autonomic systems can be implemented as multiagent systems, in which each entity is modelled as an intelligent agent. These agents can self-organize based only on local actions and interactions, so that the global behaviour of the system, required to meet its objectives, emerges from the agents’ local behaviours. This thesis addresses self-optimization in decentralized autonomic systems. Examples of techniques used to self-optimize autonomic systems include ant-colony optimization, evolutionary algorithms, neural networks, and reinforcement learning (RL). RL is considered particularly suitable for use in large-scale autonomic systems, as it does not require a predefined model of the environment. However, most applications of RL in decentralized autonomic computing address systems that optimize their behaviours towards only a single policy, while in reality management of most autonomic systems requires optimization towards multiple, often conflicting policies. These policies can be heterogeneous (i.e., implemented on different sets of agents, be active at different times and have different levels of priority), leading to the heterogeneity of the agents of which the system is composed.
The cooperation required for self-optimization is particularly challenging in such heterogeneous multi-agent environments, as agents might not be aware of other agents’ policies and their relative priority for the system. Additionally, since agents operate in the same shared environment, dependencies can arise between their performance and therefore between policy implementations as well. To address self-optimization in such decentralized autonomic systems in the presence of agent
Goal Setting and Self-Efficacy During Self-Regulated Learning
This article focuses on the self-regulated learning processes of goal setting and perceived self-efficacy. Students enter learning activities with goals and self-efficacy for goal attainment. As learners work on tasks, they observe their own performances and evaluate their own goal progress. Self-efficacy and goal setting are affected by self-observation, self-judgment, and self-reaction. When students perceive satisfactory goal progress, they feel capable of improving their skills; goal attainment, coupled with high self-efficacy, leads students to set new challenging goals. Research is reviewed on goal properties (specificity, proximity, difficulty), self-set goals, progress feedback, contracts and conferences, and conceptions of ability. Ways of teaching students to set realistic goals and evaluate progress include establishing upper and lower goal limits and employing games, contracts, and conferences. Future research might clarify the relation of goal setting and self-efficacy to transfer, goal orientations, and affective reactions. Article: Self-regulated learning occurs when students activate and sustain cognitions and behaviors systematically oriented toward attainment of learning goals. Self-regulated learning processes involve goal-directed activities that students instigate, modify, and sustain (Zimmerman, 1989). These activities include attending to instruction, processing and integrating knowledge, rehearsing information to be remembered, and developing and maintaining positive beliefs about learning capabilities and anticipated outcomes of actions (Schunk, 1989). This article focuses on two self-regulated learning processes: goal setting and perceived self-efficacy.
As used in this article, a goal is what an individual is consciously trying to accomplish, goal setting involves establishing a goal and modifying it as necessary, and perceived self-efficacy refers to beliefs concerning one's capabilities to attain designated levels of performance (Bandura, 1986, 1988). I initially present a theoretical overview of self-regulated learning to include the roles of goal setting and self-efficacy. I discuss research bearing on these processes, and conclude with implications for educational practice and future research suggestions. THEORETICAL OVERVIEW Subprocesses of Self-Regulated Learning Investigators working within a social cognitive learning theory framework view self-regulation as comprising three subprocesses: self-observation, self-judgment, and self-reaction (Bandura, 1986; Kanfer & Gaelick, 1986; Schunk, 1989). A model highlighting goal setting and self-efficacy is portrayed in Figure 1. Students enter learning activities with such goals as acquiring knowledge, solving problems, and finishing workbook pages. Self-efficacy for goal attainment is influenced by abilities, prior experiences, attitudes toward learning, instruction, and the social context. As students work on tasks, they observe their performances, evaluate goal progress, and continue their work or change their task approach. Self-evaluation of goal progress as satisfactory enhances feelings of efficacy; goal attainment leads students to set new challenging goals. Self-observation. Self-observation, or deliberate attention to aspects of one's behaviors, informs and motivates. Behaviors can be assessed on such dimensions as quality, rate, quantity, and originality. The information gained is used to gauge goal progress. Self-observation also can motivate behavioral change. Many students with poor study habits are surprised to learn that they waste much study time on nonacademic activities. 
Sustained motivation depends on students believing that if they change their behavior they will experience better outcomes, valuing those outcomes, and feeling they can change those habits (high self-efficacy). Self-observation is aided with self-recording, where behavior instances are recorded along with such features as time, place, and duration of occurrence (Mace, Belfiore, & Shea, 1989). Without recording, observations may not faithfully reflect behaviors due to selective memory. Behaviors should be observed close in time to their occurrence and on a continuous basis rather than intermittently. Self-judgment. Self-judgment involves comparing present performance with one's goal. Self-judgments are affected by the type of standards employed, goal properties (discussed in next section), importance of goal attainment, and performance attributions. Learning goals may reflect absolute or normative standards (Bandura, 1986). Absolute standards are fixed (e.g., complete six workbook pages in 30 min). Grading systems often are based on absolute standards (A = 90-100, B = 80-89). Normative standards employ performances by others. Social comparison of one's performances with those of peers helps one determine behavioral appropriateness. Standards are informative; comparing one's performance with standards informs one of goal progress. Standards also can motivate when they show that goal progress is being made. Self-judgments can be affected by the importance of goal attainment. When individuals care little about how they perform, they may not assess their performance or expend effort to improve (Bandura, 1986). Judgments of goal progress are more likely to be made for goals one personally values. Attributions, or perceived causes of outcomes (successes, failures), influence achievement beliefs and behaviors (Weiner, 1985). Achievement outcomes often are attributed to such causes as ability, effort, task difficulty, and luck (Frieze, 1980; Weiner, 1979). 
Children view effort as the prime cause of outcomes (Nicholls, 1984). With development, ability attributions become increasingly important. Whether goal progress is judged as acceptable depends on its attribution. Students who attribute successes to teacher assistance may hold low self-efficacy for good performance if they believe they cannot succeed on their own. If they believe they lack ability, they may judge learning progress as deficient and be unmotivated to work harder. Self-reaction. Self-reactions to goal progress motivate behavior (Bandura, 1986). The belief that one's progress is acceptable, along with anticipated satisfaction of goal accomplishment, enhances self-efficacy and motivation. Negative evaluations will not decrease motivation if individuals believe they are capable of improving (Schunk, 1989). Motivation will not improve if students believe they lack the ability to succeed and increased effort will not help. Individuals routinely make such rewards as work breaks, new clothes, and nights on the town contingent on task progress or goal attainment. Anticipation of rewards enhances motivation and self-efficacy. Compensations raise efficacy when they are tied to accomplishments. If students are told that they will earn rewards based on what they accomplish, they become instilled with a sense of efficacy for learning. Self-efficacy is validated as students work at a task and note their own progress; receipt of the reward then becomes a symbol of the progress made. Goal Setting The effects of goals on behavior depend on their properties: specificity, proximity, and difficulty level (Bandura, 1988; Locke, Shaw, Saari, & Latham, 1981). Goals incorporating specific performance standards are more likely to enhance learning and activate self-evaluations than general goals (i.e., "Do your best"). Specific goals boost performance by greater specification of the amount of effort required for success and the self-satisfaction anticipated.
Specific goals promote self-efficacy because progress is easy to gauge. Proximal goals result in greater motivation than distant goals. It is easier to gauge progress toward a proximal goal, and the perception of progress raises self-efficacy. Proximal goals are especially influential with young children, who do not represent distant outcomes in thought. Goal difficulty, or the level of task proficiency required as assessed against a standard, influences the effort learners expend to attain a goal. Assuming requisite skills, individuals expend greater effort to attain difficult goals than when standards are lower. Learners initially may doubt whether they can attain difficult goals, but working toward them builds self-efficacy. Self-Efficacy Self-efficacy is hypothesized to influence choice of activities, effort expended, and persistence (Bandura, 1986). Students who hold low self-efficacy for learning may avoid tasks; those who judge themselves efficacious are more likely to participate. When facing difficulties, self-efficacious learners expend greater effort and persist longer than students who doubt their capabilities. Students acquire information about their self-efficacy in a given domain from their performances, observations of models (i.e., vicarious experiences), forms of social persuasion, and physiological indexes (e.g., heart rate, sweating). Information acquired from these sources does not influence efficacy automatically but is cognitively appraised. Efficacy appraisal is an inferential process; persons weigh and combine the contributions of personal and situational factors. In assessing self-efficacy, students take into account such factors as perceived ability, expended effort, task difficulty, teacher assistance, other situational factors, and patterns of successes and failures. The notion that personal expectations influence behavior is not unique to self-efficacy theory. 
Self-efficacy is conceptually similar to such other constructs as perceived competence, expectations of success, and self-confidence. One means of distinguishing constructs involves the generality of the constructs. Some constructs (e.g., self-concept, self-esteem) are hypothesized to affect diverse areas of human functioning. Though perceptions of efficacy can generalize, they offer the best prediction of behavior within specific domains (e.g., self-efficacy for acquiring fraction
Drive current boosting of n-type tunnel FET with strained SiGe layer at source
Though the silicon tunnel field effect transistor (TFET) has attracted attention for its sub-60 mV/decade subthreshold swing and very small OFF current (IOFF), its practical application is questionable due to low ON current (ION) and complicated fabrication process steps. In this paper, a new n-type classical-MOSFET-alike tunnel FET architecture is proposed, which offers sub-60 mV/decade subthreshold swing along with a significant improvement in ION. The enhancement in ION is achieved by introducing a thin strained SiGe layer on top of the silicon source. Through 2D simulations it is observed that the device is nearly free from short channel effects (SCE) and its immunity towards drain induced barrier lowering (DIBL) increases with increasing germanium mole fraction. It is also found that the body bias does not change the drive current, though the body current is affected. An ION of 0.58 mA/mm and a minimum average subthreshold swing of 13 mV/decade are achieved for a 100 nm channel length device with a 1.2 V supply voltage and 0.7 Ge mole fraction, while maintaining IOFF in the fA range.
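Subthreshold swing, the figure of merit quoted above, is the gate-voltage change required per decade of drain current. A small sketch computing it from a hypothetical steep-slope transfer characteristic (the values are invented, not the paper's simulation data):

```python
import numpy as np

# Hypothetical transfer characteristic of a steep-slope device:
# gate voltage (V) vs. drain current (A); illustrative values only.
vg = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
id_ = np.array([1e-15, 1e-13, 1e-11, 1e-9, 1e-7])

# Subthreshold swing SS = dVg / d(log10 Id), reported in mV/decade.
ss = np.diff(vg) / np.diff(np.log10(id_)) * 1000.0
avg_ss = ss.mean()
```

For this toy curve every 50 mV of gate voltage buys two decades of current, i.e. 25 mV/decade, well below the 60 mV/decade thermionic limit of a conventional MOSFET at room temperature.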
Tracking Position and Orientation through Millimeter Wave Lens MIMO in 5G Systems
Millimeter wave signals and large antenna arrays are considered enabling technologies for future 5G networks. Despite their benefits for achieving high data rate communications, their potential advantages for tracking the location of user terminals remain largely unexplored. In this paper, we propose a novel support detection-based channel training method for frequency-selective millimeter-wave (mm-wave) multiple-input multiple-output (MIMO) systems with lens antenna arrays. We show that accurate position and orientation estimation and tracking is possible using signals from a single transmitter with lens antenna arrays. Particularly, the beamspace channel estimation is formulated as two sparse signal recovery problems in the downlink and uplink for the estimation of angle-of-arrival, angle-of-departure, and time-of-arrival. The proposed method offers a higher sparse detection probability compared to compressed sensing based solutions. Finally, a joint heuristic beamformer design and user position and orientation tracking approach are proposed based on an initial estimation of channel parameters obtained in the training phase.
A public value perspective for ICT enabled public sector reforms: A theoretical reflection
Heterogeneous integration of lithium niobate and silicon nitride waveguides for wafer-scale photonic integrated circuits on silicon.
An ideal photonic integrated circuit for nonlinear photonic applications requires high optical nonlinearities and low loss. This work demonstrates a heterogeneous platform by bonding lithium niobate (LN) thin films onto a silicon nitride (Si3N4) waveguide layer on silicon. It not only provides large second- and third-order nonlinear coefficients, but also shows low propagation loss in both the Si3N4 and the LN-Si3N4 waveguides. The tapers enable low-loss-mode transitions between these two waveguides. This platform is essential for various on-chip applications, e.g., modulators, frequency conversions, and quantum communications.
Subdermal neo-umbilicoplasty in abdominoplasty
Umbilicoplasty is an important surgical procedure in abdominoplasty, regardless of the technique used. An unaesthetic umbilicus often irreversibly affects surgical outcomes. This study describes the experience of our team with the subdermal neo-umbilicoplasty technique and assesses patient satisfaction with the appearance of the new umbilicus. Fifty-eight patients with abdominal deformity underwent abdominoplasty with subdermal neo-umbilicoplasty. Patients were followed up for at least 1 year with photographic documentation, assessment of patient satisfaction, and evaluation of eventual postoperative complications. Postoperative complications included one case of shallow umbilicus, four cases of superficial necrosis, and one case of midline deviation. No patient required surgical revision. There was a high level of patient satisfaction with the natural-looking umbilicus. Subdermal neo-umbilicoplasty resulted in low postoperative complications and provided a new, natural-looking umbilicus without external scars. Level of evidence: Level IV, therapeutic study
Sentiment Analysis of Political Tweets: Towards an Accurate Classifier
We perform a series of 3-class sentiment classification experiments on a set of 2,624 tweets produced during the run-up to the Irish General Elections in February 2011. Even though tweets labelled as sarcastic have been omitted from this set, it still represents a difficult test set, and the highest accuracy we achieve is 61.6%, using supervised learning and a feature set consisting of subjectivity-lexicon-based scores, Twitter-specific features, and the top 1,000 most discriminative words. This is superior to various naive unsupervised approaches that use subjectivity lexicons to compute an overall sentiment score for a <tweet, political party> pair.
Design of Circular/Dual-Frequency Linear Polarization Antennas Based on the Anisotropic Complementary Split Ring Resonator
A novel class of circularly polarized and dual-frequency linearly polarized microstrip antennas is proposed. The designs are based on the anisotropic property of the complementary split ring resonator. When the complementary split ring resonator is etched on the patch of a probe-fed microstrip antenna, orienting its gap asymmetrically or symmetrically with respect to the direction of current propagation causes the antenna to radiate circularly polarized waves or dual-frequency linearly polarized waves, respectively. Details of the experimental results, compared with the simulated results, are presented and discussed.
Transforming GIS Data into Functional Road Models for Large-Scale Traffic Simulation
There exists a vast amount of geographic information system (GIS) data that model road networks around the world as polylines with attributes. In this form, the data are insufficient for applications such as simulation and 3D visualization, tools that will grow in power and demand as sensor data become more pervasive and as governments try to optimize their existing physical infrastructure. In this paper, we propose an efficient method for enhancing a road map from a GIS database to create a geometrically and topologically consistent 3D model to be used in real-time traffic simulation, interactive visualization of virtual worlds, and autonomous vehicle navigation. The resulting representation provides important road features for traffic simulations, including ramps, highways, overpasses, legal merge zones, and intersections with arbitrary states, and it is independent of the simulation methodologies. We test the 3D models of road networks generated by our algorithm on real-time traffic simulation using both macroscopic and microscopic techniques.
Multispectral brain morphometry in Tourette syndrome persisting into adulthood
Tourette syndrome is a childhood-onset neuropsychiatric disorder with a high prevalence of attention deficit hyperactivity and obsessive-compulsive disorder co-morbidities. Structural changes have been found in frontal cortex and striatum in children and adolescents. A limited number of morphometric studies in Tourette syndrome persisting into adulthood suggest ongoing structural alterations affecting frontostriatal circuits. Using cortical thickness estimation and voxel-based analysis of T1- and diffusion-weighted structural magnetic resonance images, we examined 40 adults with Tourette syndrome in comparison with 40 age- and gender-matched healthy controls. Patients with Tourette syndrome showed relative grey matter volume reduction in orbitofrontal, anterior cingulate and ventrolateral prefrontal cortices bilaterally. Cortical thinning extended into the limbic mesial temporal lobe. The grey matter changes were modulated additionally by the presence of co-morbidities and symptom severity. Prefrontal cortical thickness reduction correlated negatively with tic severity, while volume increase in primary somatosensory cortex depended on the intensity of premonitory sensations. Orbitofrontal cortex volume changes were further associated with abnormal water diffusivity within grey matter. White matter analysis revealed changes in fibre coherence in patients with Tourette syndrome within anterior parts of the corpus callosum. The severity of motor tics and premonitory urges had an impact on the integrity of tracts corresponding to cortico-cortical and cortico-subcortical connections. Our results provide empirical support for a patho-aetiological model of Tourette syndrome based on developmental abnormalities, with perturbation of compensatory systems marking persistence of symptoms into adulthood. We interpret the symptom severity related grey matter volume increase in distinct functional brain areas as evidence of ongoing structural plasticity. 
The convergence of evidence from volume and water diffusivity imaging strengthens the validity of our findings and attests to the value of a novel multimodal combination of volume and cortical thickness estimations that provides unique and complementary information by exploiting their differential sensitivity to structural change.
Hinging Hyperplanes for Time-Series Segmentation
Division of a time series into segments, known as segmentation, is a common technique in time-series processing. Segmentation is traditionally done by linear interpolation in order to guarantee the continuity of the reconstructed time series. Interpolation-based segmentation methods may perform poorly on noisy data, however, because interpolation is noise-sensitive. To handle this problem, this paper establishes an explicit expression for segmentation from a compact representation of piecewise linear functions using hinging hyperplanes. This expression enables the use of regression to obtain a continuous reconstructed signal and, as a consequence, the application of advanced techniques to segmentation. A least squares support vector machine with lasso using a hinging feature map is given and analyzed, based on which a segmentation algorithm and its online version are established. Numerical experiments conducted on synthetic and real-world datasets demonstrate the advantages of our methods compared to existing segmentation algorithms.
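The hinge representation the abstract refers to can be sketched in a few lines. The one-dimensional toy fit below is an illustration of the general idea, not the paper's LS-SVM/lasso formulation: a continuous piecewise linear signal is written as f(x) = a + b·x + c·max(0, x − k), and the knot k is chosen from a candidate grid by least squares, so the reconstruction stays continuous without interpolating through noise.

```python
def hinge_design(xs, k):
    # Feature map: intercept, linear term, and the hinge feature max(0, x - k).
    return [[1.0, x, max(0.0, x - k)] for x in xs]

def lstsq3(X, y):
    # Solve the 3x3 normal equations (X^T X) beta = X^T y by Gaussian elimination.
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, 3))) / A[r][r]
    return beta

def fit_hinge(xs, ys, knots):
    # Pick the knot whose least-squares fit has the smallest residual.
    best = None
    for k in knots:
        X = hinge_design(xs, k)
        beta = lstsq3(X, ys)
        sse = sum((sum(w * f for w, f in zip(beta, row)) - y) ** 2
                  for row, y in zip(X, ys))
        if best is None or sse < best[2]:
            best = (k, beta, sse)
    return best

# Continuous piecewise-linear signal: slope 1 before x = 5, slope 3 after.
xs = [i * 0.05 for i in range(201)]
ys = [x if x < 5 else 5 + 3 * (x - 5) for x in xs]
k, beta, sse = fit_hinge(xs, ys, [1 + 0.1 * i for i in range(81)])
print(round(k, 1))  # recovered breakpoint → 5.0
```

Because the hinge feature keeps f continuous at the knot by construction, the fit recovers the breakpoint without the noise sensitivity of pointwise interpolation.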
Detecting and Tracking The Real-time Hot Topics: A Study on Computational Neuroscience
In this study, following the idea of our previous paper (Wang et al., 2013a), we improve the method for detecting and tracking hot topics in a specific field using real-time article usage data. With the “usage count” data provided by Web of Science, we take the field of computational neuroscience as an example. About 10,000 articles in the field of computational neuroscience were retrieved from Web of Science, and their records, including the usage count data of each paper, were harvested and updated weekly from October 19, 2015 to March 21, 2016. Hot topics are defined by the most frequently used keywords aggregated from the articles. The analysis reveals that hot topics in computational neuroscience are related to key technologies such as “fmri”, “eeg”, and “erp”. Furthermore, using the weekly updated data, we track the dynamic changes of these topics. The immediacy of usage data makes it possible to track the “heat” of hot topics promptly and dynamically.
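The aggregation scheme the abstract describes can be illustrated with a toy sketch. The snapshots, article IDs, keywords, and counts below are invented for illustration; the idea is simply to sum each week's cumulative usage counts onto the keywords and difference consecutive snapshots to get a keyword's "heat" for that week.

```python
from collections import Counter

def keyword_heat(snapshot):
    # A keyword's heat is the total usage count of the articles that carry it.
    heat = Counter()
    for keywords, usage in snapshot.values():
        for kw in keywords:
            heat[kw] += usage
    return heat

def weekly_delta(prev, curr):
    # Usage counts are cumulative, so the week's new usage is the difference
    # between the current and previous snapshots, per keyword.
    prev_heat, curr_heat = keyword_heat(prev), keyword_heat(curr)
    return Counter({kw: curr_heat[kw] - prev_heat[kw] for kw in curr_heat})

# Hypothetical weekly snapshots: article id -> (keywords, cumulative usage count)
week1 = {"a1": (["fmri", "eeg"], 10), "a2": (["erp"], 4), "a3": (["fmri"], 6)}
week2 = {"a1": (["fmri", "eeg"], 15), "a2": (["erp"], 5), "a3": (["fmri"], 14)}

delta = weekly_delta(week1, week2)
print(delta.most_common(1))  # → [('fmri', 13)]
```

Ranking the weekly deltas rather than the cumulative totals is what makes the tracking "real-time": a topic whose articles stop being read drops out of the top list even if its lifetime usage remains high.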
Preparation of Plasma Sprayed Coatings of Yttria Stabilized Zirconia and Strontium Zirconate and Studies on Their Interaction with Graphite Substrate
Plasma sprayed coatings are extensively used for high temperature chemical barrier applications. Thermal stability and interaction of the coating material with substrate materials are critical issues that decide coating performance. This paper reports preliminary results of high temperature interaction of plasma sprayed yttria stabilized zirconia and strontium zirconate coatings with graphite substrate.
Design of Compact F-Shaped Slot Triple-Band Antenna for WLAN/WiMAX Applications
This communication presents a small, low-profile planar triple-band microstrip antenna for WLAN/WiMAX applications. The goal is to combine the WLAN and WiMAX communication standards in a single device by designing a single antenna that excites triple-band operation. The designed antenna has a compact size of 19 × 25 mm² (0.152λ0 × 0.2λ0). The proposed antenna consists of F-shaped slot radiators and a defected ground plane. Since only two F-shaped slots are etched on either side of the radiator for triple-band operation, the radiator is very compact and simple in structure. The antenna exhibits three distinct bands: band I from 2.0 to 2.76 GHz, band II from 3.04 to 4.0 GHz, and band III from 5.2 to 6.0 GHz, which together cover the entire WLAN (2.4/5.2/5.8 GHz) and WiMAX (2.5/3.5/5.5 GHz) bands. To validate the proposed design, an experimental prototype has been fabricated and tested. The simulation results, along with the measurements, show that the antenna can simultaneously operate over the WLAN (2.4/5.2/5.8 GHz) and WiMAX (2.5/3.5/5.5 GHz) frequency bands.
FPGA Design and Implementation of a Real-Time Stereo Vision System
Stereo vision is a well-known ranging method because it resembles the basic mechanism of the human eye. However, the computational complexity and the large amount of data access make real-time processing of stereo vision challenging, because of the inherent instruction-cycle delay of conventional computers. To solve this problem, the past 20 years of research have focused on dedicated hardware architectures for stereo vision. This paper proposes a fully pipelined stereo vision system that provides a dense disparity image with additional sub-pixel accuracy in real time. The entire stereo vision process, including rectification, stereo matching, and post-processing, is realized on a single field programmable gate array (FPGA) without any external devices. The hardware implementation is more than 230 times faster than a software program running on a conventional computer, and it outperforms previous hardware implementations.
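The stereo matching step at the heart of such a system can be sketched independently of any FPGA details. The following is a minimal software illustration of window-based block matching, not the paper's pipeline (which adds rectification, post-processing, and sub-pixel refinement): for each pixel, the horizontal shift minimizing the sum of absolute differences (SAD) is taken as the disparity.

```python
def sad(left, right, r, c, d, w):
    # Matching cost: sum of absolute differences over a (2w+1) x (2w+1) window,
    # comparing the left image to the right image shifted by disparity d.
    return sum(abs(left[r + dr][c + dc] - right[r + dr][c + dc - d])
               for dr in range(-w, w + 1) for dc in range(-w, w + 1))

def disparity_map(left, right, max_d, w=1):
    h, width = len(left), len(left[0])
    out = [[0] * width for _ in range(h)]
    for r in range(w, h - w):
        for c in range(w + max_d, width - w):
            # Winner-take-all: keep the disparity with the lowest cost.
            out[r][c] = min(range(max_d + 1),
                            key=lambda d: sad(left, right, r, c, d, w))
    return out

# Synthetic pair with a known uniform disparity of 2: both views are cut from
# a shared 8 x 12 base pattern, with the right view shifted two pixels.
base = [[(7 * r + 13 * c) % 29 for c in range(12)] for r in range(8)]
left = [row[:10] for row in base]
right = [row[2:] for row in base]

dmap = disparity_map(left, right, max_d=4)
print(dmap[4][6])  # → 2
```

The inner SAD loop is exactly the kind of regular, data-parallel computation that maps well onto a pipelined FPGA datapath, which is why hardware implementations achieve such large speedups over sequential software.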
The Representation and Processing of Coreference in Discourse
A model is presented that addresses both the distribution and comprehension of different forms of referring expressions in language. This model is expressed in a formalism (Kamp & Reyle, 1993) that uses interpretive rules to map syntactic representations onto representations of discourse. Basic interpretive rules are developed for names, pronouns, definite descriptions, and quantified descriptions. These rules are triggered by syntactic input and interact dynamically with representations of discourse to establish reference and coreference. This interaction determines the ease with which coreference can be established for different linguistic forms given the existing discourse context. The performance of the model approximates that observed in studies of intuitive judgments of grammaticality and studies using online measures of language comprehension. The model uses the same basic interpretive mechanisms for coreference within and between sentences, thereby linking the domain traditionally studied by generative linguists to domains that have been of concern primarily to psychologists and computational linguists.
New Rogowski coil design with a high DV/DT immunity and high bandwidth
With the fast development of modern power semiconductors in recent years, current measurement technologies have had to adapt to this evolution. The challenge for the power electronics engineer is to provide a current sensor with a high bandwidth and a high immunity to external interference. Rogowski current transducers are popular for monitoring transient currents in power electronic applications without interference from external magnetic fields. But the trend toward ever higher current and voltage gradients creates a dilemma for Rogowski current transducer technology. On the one hand, a high current gradient requires a current sensor with a high bandwidth. On the other hand, high voltage gradients force the use of a shield around the Rogowski coil to protect the measurement signal from the capacitive displacement current caused by unavoidable capacitive coupling to the setup, and this shield reduces the bandwidth substantially. This paper presents a new Rogowski coil design that allows high current gradients to be measured close to high voltage gradients without interference and without a bandwidth-reducing shield. This new measurement technique resolves the dilemma and prepares the Rogowski current transducer for measuring the currents of modern power semiconductors such as SiC and GaN devices.
Sexual differentiation of the human brain: relevance for gender identity, transsexualism and sexual orientation.
Male sexual differentiation of the brain and behavior are thought, on the basis of experiments in rodents, to be caused by androgens, following conversion to estrogens. However, observations in human subjects with genetic and other disorders show that direct effects of testosterone on the developing fetal brain are of major importance for the development of male gender identity and male heterosexual orientation. Solid evidence for the importance of postnatal social factors is lacking. In the human brain, structural differences have been described that seem to be related to gender identity and sexual orientation.
Florence and Rosamond Davenport Hill and the Development of Boarding Out in England and Australia: a study in cultural transmission
The adoption of boarding out by state children's departments across Australia is often attributed to the influence of English social reformers Florence and Rosamond Davenport Hill, whose visit to the colonies in the early 1870s coincided with a period of growing dis-ease with existing provisions for neglected children. However, after their return to Britain, they used their experience in the colonies to castigate English authorities for being too slow to adopt a similar course. This article complicates existing theories of cultural transmission in relation to ideas about child welfare. It analyses the ways in which the Davenport Hill sisters laid claim to their expert speaking position, and argues for the importance of informal networks in the development of child welfare policy in the years before the rise of transnational children's rights organisations.
Learning about the letter name subset of the vocabulary: Evidence from US and Brazilian preschoolers
To examine the factors that affect the learning of letter names, an important foundation for literacy, we asked 318 US and 369 Brazilian preschoolers to identify each uppercase letter. Similarity of letter shape was the major determinant of confusion errors in both countries, and children were especially likely to interchange letters that were similar in shape as well as name. Errors were also affected by letter frequency, both general frequency and occurrence of letters in children’s own names. Differences in letter names and letter frequencies between English and Portuguese led to certain differences in the patterns of performance for children in the two countries. Other differences appeared to reflect US children’s greater familiarity with the conventional order of the alphabet. Boys were overrepresented at the low end of the continuum of letter name knowledge, suggesting that some boys begin formal reading instruction lacking important foundational skills. A child’s ability to identify the letters of the alphabet by name is one of the best predictors of how readily he or she will learn to read. Kindergarten letter identification accounts for nearly one-third of the variance in reading ability in Grades 1 to 3, and it is almost as successful at predicting later reading skill as an entire reading readiness test (Snow, Burns, & Griffin, 1998). Knowledge of letter names aids would-be readers and spellers in several ways (see Foulin, 2005). It helps them make some sense of printed words such as jail, where the entire name of one or more of the letters is heard in the spoken word. In addition, letter name knowledge helps children learn about the sound-symbolizing function of letters, because the phoneme that a letter represents is usually heard in the letter’s name. 
Effects of letter name knowledge on reading, spelling, and letter sound knowledge have been documented in languages as diverse as English (McBride-Chang, 1999; Treiman, Tincoff, Rodriguez, Mouzaki, & Francis, 1998), Portuguese (Abreu & Cardoso-Martins, 1998), and Hebrew (Levin, Patel, Margalit, & Barad, 2002). Given the foundational role of alphabet knowledge in literacy development, it is important to understand the processes involved in letter name learning. In the present study, we explore the hypothesis that these are the same processes underlying the acquisition of spoken vocabulary in general. This hypothesis is motivated by the fact that children in the United States and other literate societies begin to learn the names of letters as early as 2 or 3 years of age, at the same time they are learning the names of many other things. For a young child, learning to label the shape D with the syllable /di/ may be quite similar to learning to label the shape of a star with the label /stAr/. It may be several years before the child realizes that D symbolizes a linguistic unit, a phoneme, and in this respect is different from a star. Letter name learning may thus form a bridge between the acquisition of a spoken vocabulary and the acquisition of literacy. If this hypothesis is correct, it would suggest that vocabulary development and literacy development are linked to one another in a way that has not previously been envisioned in the research literature. To examine children’s learning of letter names, and to determine whether it is affected by the same variables that influence vocabulary learning, we asked preschoolers in the United States and Brazil to identify each letter of the alphabet by name. We examined their correct responses and the nature of their errors.
Uppercase letters were used because these are typically learned before lowercase letters (Worden & Boettcher, 1990). We tested a large number of children (318 in the United States, 369 in Brazil) to ensure sufficient power to detect effects. The cross-language comparison is useful because, although both English and Portuguese use the letters of the Latin alphabet, the languages differ in some important ways that were expected to be relevant to letter name learning. Table 1 shows one difference: the names that are given to the letters. The US and Brazilian names are similar in some cases but not others. For example, the name of Q rhymes with that of B in Portuguese but not in English. The languages also differ in the relative frequencies of the letters. For example, T is more common than A in English words, but the reverse is true in Portuguese. If children who speak different languages perform differently on the same letters, then the factors by which those letters differ in those languages are promising candidates for explaining children’s performance. Researchers have documented some differences across languages and writing systems in the development of reading skills (e.g., Seymour, Aro, & Erskine, 2003), but we know of no direct cross-language comparisons of letter name learning. Our hypothesis that children learn the names of letters in much the same way that they learn the nouns that label other concrete objects suggests that we should look to the literature on vocabulary development for suggestions about the factors that may be involved in letter name learning. This literature shows that shape plays an important role in word learning. Objects that are similar in shape often belong to the same category and are called the same name. Indeed, many of the count nouns in young children’s vocabularies refer to classes of objects that are similar in shape (e.g., Gershkoff-Stowe & Smith, 2004).
Children’s reliance on shape is revealed in their generalizations about the names of objects. For example, one toddler said moon when playing with a half-moon shaped lemon slice and when touching a circular chrome dial on a dishwasher, having previously used this word when looking at the moon (Bowerman, 1978). Similar sorts of extensions occur with lowercase letters. For example, children may use the name of b for the similarly shaped d or the name of p for q (e.g., Courrieu & De Falco, 1989; Popp, 1964). Treiman and Kessler (2003), in regression analyses carried out over the 26 letters of the English alphabet, found that children made more naming errors on
Table 1. Letter names in Brazilian Portuguese and English.
Eating disorders in athletes: overview of prevalence, risk factors and recommendations for prevention and treatment.
The prevalence of disordered eating and eating disorders varies from 0% to 19% in male athletes and from 6% to 45% in female athletes. The objective of this paper is to present an overview of eating disorders in adolescent and adult athletes including: (1) prevalence data; (2) suggested sport- and gender-specific risk factors and (3) importance of early detection, management and prevention of eating disorders. Additionally, this paper presents suggestions for future research which includes: (1) the need for knowledge regarding possible gender-specific risk factors and sport- and gender-specific prevention programmes for eating disorders in sports; (2) suggestions for long-term follow-up for female and male athletes with eating disorders and (3) exploration of a possible male athlete triad.
Multi-task Domain Adaptation for Sequence Tagging
Many domain adaptation approaches rely on learning cross domain shared representations to transfer the knowledge learned in one domain to other domains. Traditional domain adaptation only considers adapting for one task. In this paper, we explore multi-task representation learning under the domain adaptation scenario. We propose a neural network framework that supports domain adaptation for multiple tasks simultaneously, and learns shared representations that better generalize for domain adaptation. We apply the proposed framework to domain adaptation for sequence tagging problems considering two tasks: Chinese word segmentation and named entity recognition. Experiments show that multi-task domain adaptation works better than disjoint domain adaptation for each task, and achieves the state-of-the-art results for both tasks in the social media domain.
Blinded 12-week comparison of once-daily indacaterol and tiotropium in COPD.
Two once-daily (q.d.) inhaled bronchodilators are available for the treatment of chronic obstructive pulmonary disease (COPD): the β2-agonist indacaterol and the anticholinergic tiotropium. This blinded study compared the efficacy of these two agents and assessed their safety and tolerability. Patients with moderate-to-severe COPD were randomised to treatment with indacaterol 150 μg q.d. (n=797) or tiotropium 18 μg q.d. (n=801) for 12 weeks. After 12 weeks, the two treatments had similar overall effects on "trough" (24 h post-dose) forced expiratory volume in 1 s. Indacaterol-treated patients had greater improvements in transition dyspnoea index (TDI) total score (least squares means 2.01 versus 1.43; p<0.001) and St George's Respiratory Questionnaire (SGRQ) total score (least squares means 37.1 versus 39.2; p<0.001; raw mean change from baseline -5.1 versus -3.0), and were significantly more likely to achieve clinically relevant improvements in these end-points (indacaterol versus tiotropium odds ratios of 1.49 for TDI and 1.43 for SGRQ, both p<0.001). Adverse events were recorded for 39.7% and 37.2% of patients in the indacaterol and tiotropium treatment groups, respectively. The most frequent adverse events were COPD worsening, cough and nasopharyngitis. Both bronchodilators demonstrated spirometric efficacy. The two treatments were well tolerated with similar adverse event profiles. Compared with tiotropium, indacaterol provided significantly greater improvements in clinical outcomes.
Interactive Reference Region Based Multi-Objective Evolutionary Algorithm Through Decomposition
Many evolutionary multi-objective optimization (EMO) methodologies have been proposed and have shown great potential in approximating the entire Pareto front. In real-world problems, however, what decision makers (DMs) want is one or several solutions that satisfy their requirements. Dynamically using preference information provided by DMs during the optimization process to guide the search of EMO algorithms has therefore become an active research topic. An interactive reference region-based evolutionary algorithm through decomposition, denoted RR-MOEA/D, is proposed in this paper; it focuses the search on the regions the DMs desire in order to save computational resources. MOEA/D, a well-known multi-objective optimization algorithm, is used as the basic framework. By handling only the sub-problems in the preference region and ignoring the uninteresting ones, the solutions obtained converge to the regions of the Pareto front that the DM prefers, and computational effort is saved to a great extent. At each interaction, a simple and user-friendly interactive condition is adopted so that the reference region can be changed in a very intuitive way if the DM is unsatisfied with the results of the interactive process. A rapid interaction is designed so that a set of rough solutions can be obtained quickly whenever the preference information changes. The proposed algorithm is tested on several benchmark problems, and the experimental results show that it makes full use of preference information and successfully converges to the reference region thanks to its reasonable and simple interaction mechanism.
Modelling Experimental Game Design
This paper uses two models of design from general design research, Stolterman’s and Löwgren’s three abstraction levels and Lawson’s model of designing, to describe the game design process of an experimental pervasive mobile phone game. The game was designed to be deployed at a large science fiction convention for two days and was part of a research-through-design project whose focus was to understand which core mechanics could work for pervasive mobile phone games. The design process was, as is usual for experimental designs, very iterative. Data were gathered during the design process as entries in a design diary, notes from playtesting and bodystorming sessions, user interface sketches, and a series of software prototypes. The two complementary models of design were used to analyse the design process, and the result is that the models give a good overview of an experimental game design process and reveal activities, design situations, and design choices that could otherwise have been lost in the analysis.
A case-control study of anatomic changes resulting from sexual abuse.
OBJECTIVE Our goal was to identify vulvar and hymenal characteristics associated with sexual abuse among female children between the ages of 3 and 8 years. STUDY DESIGN Using a case-control study design, we examined and photographed the external genitalia of 192 prepubertal children with a history of penetration and 200 children who denied prior abuse. Bivariate analyses were conducted using the χ2 test, the Fisher exact test, and the Student t test to assess differences in vulvar and hymenal features between groups. RESULTS Vaginal discharge was observed more frequently in abused children (P =.01). No difference was noted in the percentage of abused versus nonabused children with labial agglutination, increased vascularity, linea vestibularis, friability, a perineal depression, or a hymenal bump, tag, longitudinal intravaginal ridge, external ridge, band, or superficial notch. Furthermore, the mean number of each of these features per child did not differ between groups. A hymenal transection, perforation, or deep notch was observed in 4 children, all of whom were abused. CONCLUSION The genital examination of the abused child rarely differs from that of the nonabused child. Thus legal experts should focus on the child's history as the primary evidence of abuse.
Food packaging history and innovations.
Food packaging has evolved from simply a container to hold food to something today that can play an active role in food quality. Many packages are still simply containers, but they have properties that have been developed to protect the food. These include barriers to oxygen, moisture, and flavors. Active packaging, or that which plays an active role in food quality, includes some microwave packaging as well as packaging that has absorbers built in to remove oxygen from the atmosphere surrounding the product or to provide antimicrobials to the surface of the food. Packaging has allowed access to many foods year-round that otherwise could not be preserved. It is interesting to note that some packages have actually allowed the creation of new categories in the supermarket. Examples include microwave popcorn and fresh-cut produce, which owe their existence to the unique packaging that has been developed.
Effects of intravenous propranolol and metoprolol and their interaction with isoprenaline on pulmonary function, heart rate and blood pressure in asthmatics
The effects of propranolol (0.06 mg/kg i.v.), the selective β1-receptor antagonist metoprolol (0.12 mg/kg i.v.) and a placebo on pulmonary function, heart rate and blood pressure have been compared in asthmatics. The interaction of these drugs with increasing doses of isoprenaline on the same variables was also studied. The two β-blockers reduced resting heart rate to the same extent, indicating the same degree of blockade of cardiac β-receptors. Both β-blockers reduced the basal forced expiratory volume in one second (FEV1), and the effect tended to be more pronounced after propranolol. Isoprenaline caused a dose-dependent increase in FEV1 and vital capacity (VC). These effects were almost completely blocked by propranolol, whereas after metoprolol the changes approached those of the placebo. The isoprenaline-induced increase in heart rate and fall in diastolic blood pressure were also inhibited to a considerably greater extent by propranolol than by metoprolol. The results show a selectivity of metoprolol for so-called β1-receptors and indicate that metoprolol may be used in asthmatics provided that it is combined with β2-receptor-stimulating drugs.
Biodegradation of lignocellulosics: microbial, chemical, and enzymatic aspects of the fungal attack of lignin.
Wood is the main renewable material on Earth and is largely used as building material and in paper-pulp manufacturing. This review describes the composition of lignocellulosic materials, the different processes by which fungi are able to alter wood, including decay patterns caused by white, brown, and soft-rot fungi, and fungal staining of wood. The chemical, enzymatic, and molecular aspects of the fungal attack of lignin, which represents the key step in wood decay, are also discussed. Modern analytical techniques to investigate fungal degradation and modification of the lignin polymer are reviewed, as are the different oxidative enzymes (oxidoreductases) involved in lignin degradation. These include laccases, high redox potential ligninolytic peroxidases (lignin peroxidase, manganese peroxidase, and versatile peroxidase), and oxidases. Special emphasis is given to the reactions catalyzed, their synergistic action on lignin, and the structural bases for their unique catalytic properties. Broadening our knowledge of lignocellulose biodegradation processes should contribute to better control of wood-decaying fungi, as well as to the development of new biocatalysts of industrial interest based on these organisms and their enzymes.
Motor Schema-Based Mobile Robot Navigation
Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological, and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. Simulation results and actual mobile robot experiments demonstrate the feasibility of this approach.
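As a rough illustration of the schema idea (not Arkin's implementation), each behaviour below computes its own velocity vector and the robot simply follows their sum, in the spirit of the potential field variant the abstract mentions. The goal, obstacle position, gains, and radii are made-up values for the sketch.

```python
import math

def move_to_goal(pos, goal, gain=1.0):
    # Attractive schema: unit vector toward the goal, scaled by its gain.
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    return (gain * dx / d, gain * dy / d)

def avoid_obstacle(pos, obs, radius=2.0, gain=1.0):
    # Repulsive schema: pushes away from the obstacle, fading linearly to
    # zero at the edge of its sphere of influence.
    dx, dy = pos[0] - obs[0], pos[1] - obs[1]
    d = math.hypot(dx, dy)
    if d == 0.0 or d >= radius:
        return (0.0, 0.0)
    m = gain * (radius - d) / radius
    return (m * dx / d, m * dy / d)

def combined_velocity(pos, goal, obstacles):
    # Schemas contribute independently; the command is their vector sum.
    vecs = [move_to_goal(pos, goal)] + [avoid_obstacle(pos, o) for o in obstacles]
    return (sum(v[0] for v in vecs), sum(v[1] for v in vecs))

# Step a point robot from the origin toward the goal, past one obstacle.
pos, goal, obstacles = (0.0, 0.0), (10.0, 0.0), [(5.0, 0.5)]
for _ in range(40):
    vx, vy = combined_velocity(pos, goal, obstacles)
    pos = (pos[0] + 0.5 * vx, pos[1] + 0.5 * vy)
print(math.hypot(pos[0] - goal[0], pos[1] - goal[1]) < 1.0)  # robot is near the goal
```

Because each schema is an independent process whose output is just a vector, new behaviours can be added or retuned without touching the others; the coordination is nothing more than vector summation.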
Development of a polymer-based tendon-driven wearable robotic hand
This paper presents the development of a polymer-based tendon-driven wearable robotic hand, Exo-Glove Poly. Unlike the previously developed Exo-Glove, a fabric-based tendon-driven wearable robotic hand, Exo-Glove Poly was made of silicone to allow for sanitization between users in multiple-user environments such as hospitals. Exo-Glove Poly uses two motors, one for the thumb and the other for the index/middle finger, and an under-actuation mechanism to grasp various objects. To realize Exo-Glove Poly, design features and fabrication processes were developed to permit adjustment to different hand sizes, to protect users from injury, to enable ventilation, and to embed Teflon tubes for the wire paths. The mechanical properties of Exo-Glove Poly were verified with a healthy subject through a wrap grasp experiment using a mat-type pressure sensor and an under-actuation performance experiment with a specialized test set-up. Finally, the performance of Exo-Glove Poly in grasping objects of various shapes, including objects requiring under-actuation, was verified.