A simple amplitude detector-based demodulator for resolver converters
This paper presents a simple method, based on sinusoidal-amplitude detectors, for realizing a resolver-signal demodulator. The proposed demodulator consists of two full-wave rectifiers, two ±unity-gain amplifiers, and two sinusoidal-amplitude detectors with control switches. The two output voltages are proportional to the sine and cosine envelopes of the resolver-shaft angle without requiring a low-pass filter. Experimental results demonstrating the characteristics of the proposed circuit are included.
Model-Free Head Pose Estimation Based on Shape Factorisation and Particle Filtering
Head pose estimation is essential for several applications and is particularly required for head pose-free eye-gaze tracking, where estimation of head rotation permits free head movement during tracking. While the literature is broad, the accuracy of recent vision-based head pose estimation methods is contingent upon the availability of training data or accurate initialisation and tracking of specific facial landmarks. In this paper, we propose a method to estimate the head pose in real time from the trajectories of a set of feature points spread randomly over the face region, without requiring a training phase or model-fitting of specific facial features. Instead of seeking specific facial landmarks, our method exploits the sparse 3-dimensional shape of the surface of interest, recovered via shape and motion factorisation, in combination with particle filtering to correct mistracked feature points and improve upon an initial estimation of the 3-dimensional shape during tracking. In comparison with two additional methods, quantitative results obtained through our model- and landmark-free method yield a reduction in the head pose estimation error for a wide range of head rotation angles.
Estimating Ad Effectiveness using Geo Experiments in a Time-Based Regression Framework
Two previously published papers (Vaver and Koehler, 2011, 2012) describe a model for analyzing geo experiments. This model was designed to measure advertising effectiveness using the rigor of a randomized experiment with replication across geographic units, providing confidence interval estimates. While effective, this geo-based regression (GBR) approach is less applicable, or not applicable at all, in situations where few geographic units are available for testing (e.g. smaller countries, or subregions of larger countries). These situations also include the so-called matched market tests, which may compare the behavior of users in a single control region with the behavior of users in a single test region. To fill this gap, we have developed an analogous time-based regression (TBR) approach for analyzing geo experiments. This methodology predicts the time series of the counterfactual market response, allowing for direct estimation of the cumulative causal effect at the end of the experiment. In this paper we describe this model and evaluate its performance using simulation.
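The core TBR idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual model: it assumes a simple linear pre-period relationship between the control and test market response, whereas the published method uses a richer time-series model.

```python
# Minimal time-based regression (TBR) sketch: fit the test-vs-control
# relationship on the pre-period, predict the counterfactual test-period
# response, and accumulate the observed-minus-predicted lift.
def fit_line(x, y):
    # Ordinary least squares for y = a + b*x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def cumulative_effect(control, test, pre_len):
    # Fit on the pre-period only; the post-period prediction is the
    # counterfactual "no campaign" response.
    a, b = fit_line(control[:pre_len], test[:pre_len])
    return sum(t - (a + b * c)
               for c, t in zip(control[pre_len:], test[pre_len:]))
```

For example, if the pre-period relationship is exactly test = 2 × control, any post-period excess over that line is attributed to the campaign.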
A double-blind randomized study comparing imipramine with fluvoxamine in depressed inpatients
To compare the efficacy of imipramine and fluvoxamine in inpatients from two centers suffering from a depressive disorder according to DSM IV criteria. The study included 141 patients with a depressive disorder according to DSM IV criteria. After a drug-free and placebo run-in period of 1 week, patients were randomized to imipramine or fluvoxamine; doses of both drugs were adjusted to a predefined target blood level. Efficacy was evaluated 4 weeks after attaining the predefined adequate plasma level. The mean age of the study group (47 males, 94 females) was 51.8 (range 19–65) years. Of these 141 patients, 56 had an episode duration longer than 1 year, 48 had mood congruent psychotic features, and 138 patients received medication. Seven patients did not complete the medication trial. The total number of patients using concurrent medication was 12/138 (8.6%). On the primary outcome criteria, patients on imipramine showed significantly greater improvement on the change of illness severity score of the CGI (χ2 exact trend test=4.089, df=1, P=0.048). There was no significant difference in 50% or more reduction on the HRSD, the other primary outcome criterion. On the secondary outcome criteria, the mean reduction of the HRSD scores was significantly larger in the imipramine group than in the fluvoxamine group (mean difference=3.1, standard error (SE)=1.4, t=2.15, df=136, P=0.033). There was no significant difference in the number of patients with an HRSD ≤7 at the final evaluation. In depressed inpatients imipramine is more efficacious than fluvoxamine. Both drugs were well tolerated by all patients.
TimeNet: Pre-trained deep recurrent neural network for time series classification
In the spirit of the tremendous success of deep Convolutional Neural Networks as generic feature extractors from images, we propose TimeNet: a multilayered recurrent neural network (RNN) trained in an unsupervised manner to extract features from time series. Fixed-dimensional vector representations or embeddings of variable-length sentences have been shown to be useful for a variety of document classification tasks. TimeNet is the encoder network of an auto-encoder based on sequence-to-sequence models that transforms varying-length time series to fixed-dimensional vector representations. Once TimeNet is trained on diverse sets of time series, it can then be used as a generic off-the-shelf feature extractor for time series. We train TimeNet on time series from 24 datasets belonging to various domains from the UCR Time Series Classification Archive, and then evaluate embeddings from TimeNet for classification on 30 other datasets not used for training TimeNet. We observe that a classifier learnt over the embeddings obtained from a pre-trained TimeNet yields significantly better performance compared to (i) a classifier learnt over the embeddings obtained from the encoder network of a domain-specific auto-encoder, as well as (ii) a nearest neighbor classifier based on the well-known and effective Dynamic Time Warping (DTW) distance measure. We also observe that a classifier trained on embeddings from TimeNet gives competitive results in comparison to a DTW-based classifier even when using a significantly smaller set of labeled training data, providing further evidence that TimeNet embeddings are robust. Finally, t-SNE visualizations of TimeNet embeddings show that time series from different classes form well-separated clusters.
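The key property the abstract relies on is that a recurrent encoder maps a variable-length series to a fixed-dimensional vector (the final hidden state). The toy sketch below illustrates only that mechanism: the tanh-RNN weights are fixed illustrative values, not trained parameters, and the real TimeNet is a trained multilayer seq2seq auto-encoder.

```python
import math

# Toy recurrent encoder: h_t = tanh(W*x_t + U*h_{t-1}).
# The final hidden state is a fixed-length "embedding" regardless of
# how long the input series is. Weights here are arbitrary constants.
def rnn_encode(series, dim=3):
    W = [0.5 + 0.1 * i for i in range(dim)]
    U = [[0.1 if i == j else 0.05 for j in range(dim)] for i in range(dim)]
    h = [0.0] * dim
    for x in series:
        h = [math.tanh(W[i] * x + sum(U[i][j] * h[j] for j in range(dim)))
             for i in range(dim)]
    return h  # fixed-dimensional embedding
```

A series of length 2 and a series of length 10 both map to the same 3-dimensional space, which is what makes downstream classifiers over such embeddings possible.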
Analyzing software requirements errors in safety-critical, embedded systems
This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non-safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.
Improving the detection and prediction of suicidal behavior among military personnel by measuring suicidal beliefs: an evaluation of the Suicide Cognitions Scale.
BACKGROUND Newer approaches for understanding suicidal behavior suggest the assessment of suicide-specific beliefs and cognitions may improve the detection and prediction of suicidal thoughts and behaviors. The Suicide Cognitions Scale (SCS) was developed to measure suicide-specific beliefs, but it has not been tested in a military setting. METHODS Data were analyzed from two separate studies conducted at three military mental health clinics (one U.S. Army, two U.S. Air Force). Participants included 175 active duty Army personnel with acute suicidal ideation and/or a recent suicide attempt referred for a treatment study (Sample 1) and 151 active duty Air Force personnel receiving routine outpatient mental health care (Sample 2). In both samples, participants completed self-report measures and clinician-administered interviews. Follow-up suicide attempts were assessed via clinician-administered interview for Sample 1. Statistical analyses included confirmatory factor analysis, between-group comparisons by history of suicidality, and generalized regression modeling. RESULTS Two latent factors were confirmed for the SCS: Unloveability and Unbearability. Each demonstrated good internal consistency, convergent validity, and divergent validity. Both scales significantly predicted current suicidal ideation (βs >0.316, ps <0.002) and significantly differentiated suicide attempts from nonsuicidal self-injury and control groups (F(6, 286)=9.801, p<0.001). Both scales significantly predicted future suicide attempts (AORs>1.07, ps <0.050) better than other risk factors. LIMITATIONS Self-report methodology, small sample sizes, predominantly male samples. CONCLUSIONS The SCS is a reliable and valid measure that predicts suicidal ideation and suicide attempts among military personnel better than other well-established risk factors.
Electromigration and its impact on physical design in future technologies
Electromigration (EM) is one of the key concerns going forward for interconnect reliability in integrated circuit (IC) design. Although analog designers have been aware of the EM problem for some time, digital circuits are also being affected now. This talk addresses basic design issues and their effects on electromigration during interconnect physical design. The intention is to increase current density limits in the interconnect by adopting electromigration-inhibiting measures, such as short-length and reservoir effects. Exploitation of these effects at the layout stage can provide partial relief of EM concerns in future IC design flows.
Semi-Markov model for simulating MOOC students
Large-scale experiments are often expensive and time consuming. Although Massive Open Online Courses (MOOCs) provide a solid and consistent framework for learning analytics, MOOC practitioners are still reluctant to risk resources in experiments. In this study, we suggest a methodology for simulating MOOC students, which allows estimation of outcome distributions before implementing a large-scale experiment. To this end, we employ generative models to draw independent samples of artificial students in Monte Carlo simulations. We use semi-Markov chains for modeling students' activities and the Expectation-Maximization algorithm for fitting the model. From the fitted model, we generate simulated students whose processes of weekly activities are similar to those of the real students.
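A Monte Carlo draw of one artificial student can be sketched as below. This is a simplified illustration with hypothetical states and geometric sojourn times; a full semi-Markov model allows arbitrary duration distributions, and the real parameters would be fitted with EM as the abstract describes.

```python
import random

# Simulate one student's weekly activity states. trans[s] lists
# (next_state, prob) pairs; stay_p[s] is the per-week probability of
# remaining in state s (i.e., geometric sojourn times).
def simulate_student(weeks, trans, stay_p, rng):
    state, trace = "active", []
    for _ in range(weeks):
        trace.append(state)
        if rng.random() >= stay_p[state]:  # sojourn in this state ends
            r, acc = rng.random(), 0.0
            for nxt, p in trans[state]:
                acc += p
                if r < acc:
                    state = nxt
                    break
    return trace
```

Drawing many such traces with fitted parameters yields the distribution of weekly-activity processes against which an experiment design can be checked.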
Problems of Dialect Non-Inclusion in Tshivenḓa Bilingual Dictionary Entries
Tshivenḓa is characterised by a number of dialects which exhibit some linguistic features different from those of other groups. The standard dialect in Tshivenḓa is Tshiphani. This dialect is spoken in the areas of Tshivhase and Mphaphuli. The selection of Tshiphani as the standard dialect in Tshivenḓa did not cause the other dialects to die out, as they are still used by the Vhavenḓa as spoken languages. However, there is non-inclusion of dialectal entries in some dictionaries, whereas in others, very few dialectal entries have been included. A lexicographer must always take into consideration that there is variation in language. Lexicographers should not see the inclusion of non-standard dialects as corrupting the standard language. Tshiphani was superimposed on other dialects. A dictionary is expected to accommodate all dialects of a language because they have equal value in spoken language. It is important for a lexicographer to first carry out research regarding the existence of dialects in a language if one intends to compile a dictionary. This paper seeks to show that it is necessary to include lexicons from non-standard dialects in lexicographic works such as bilingual dictionaries because there is no dialect which is better than others. The addition of non-standard dialects in dictionaries will enrich the languages.
Scapular dyskinesis and its relation to shoulder pain.
Scapular dyskinesis is an alteration in the normal position or motion of the scapula during coupled scapulohumeral movements. It occurs in a large number of injuries involving the shoulder joint and often is caused by injuries that result in the inhibition or disorganization of activation patterns in scapular stabilizing muscles. It may increase the functional deficit associated with shoulder injury by altering the normal scapular role during coupled scapulohumeral motions. Scapular dyskinesis appears to be a nonspecific response to shoulder dysfunction because no specific pattern of dyskinesis is associated with a specific shoulder diagnosis. It should be suspected in patients with shoulder injury and can be identified and classified by specific physical examination. Treatment of scapular dyskinesis is directed at managing underlying causes and restoring normal scapular muscle activation patterns by kinetic chain-based rehabilitation protocols.
A review on abnormal crowd behavior detection
Crowd analysis has become a very popular research topic in the area of computer vision. There is a growing requirement for smarter video surveillance of private and public spaces using intelligent vision systems that can differentiate what is semantically important to a human observer, i.e., normal versus abnormal behavior. People counting, people tracking, and crowd behavior analysis are the different stages of a computer-based crowd analysis algorithm. This paper focuses on crowd behavior analysis, which can detect normal or abnormal behavior.
Linking Loneliness, Shyness, Smartphone Addiction Symptoms, and Patterns of Smartphone Use to Social Capital
The purpose of this study is to explore the roles of psychological attributes (such as shyness and loneliness) and smartphone usage patterns in predicting smartphone addiction symptoms and social capital. Data were gathered from a sample of 414 university students using an online survey in Mainland China. Results from exploratory factor analysis identified five smartphone addiction symptoms: disregard of harmful consequences, preoccupation, inability to control craving, productivity loss, and feeling anxious and lost, which formed the Smartphone Addiction Scale. Results show that the higher one scored in loneliness and shyness, the higher the likelihood one would be addicted to smartphones. Furthermore, this study shows the most powerful predictor, inversely affecting both bonding and bridging social capital, was loneliness. Moreover, this study presents clear evidence that the use of smartphones for different purposes (especially for information seeking, sociability, and utility) and the exhibition of different addiction symptoms (such as preoccupation and feeling anxious and lost) significantly impacted social capital building. The significant links between smartphone addiction and smartphone usage, loneliness, and shyness have clear implications for treatment and intervention for parents, educators, and policy makers. Suggestions for future research are discussed.
Object-oriented design for C++
1. Preface. 2. Data Abstraction. 3. Inheritance. 4. Dynamic Binding. 5. Parameters. 6. Type as Object. 7. Pointers to Member Functions. 8. Design. 9. Case Study: Graphics Editor. 10. A Text Editor: Requirements. 11. Buffers, Sloops, and Yachts. 12. Extending the Text Editor.
INFLUENCE OF SOCIAL NETWORKING ON THE STUDY HABITS AND PERFORMANCE OF STUDENTS IN A STATE UNIVERSITY
This study aimed to ascertain the influence of social networking on the study habits and academic performance of tertiary students of the West Visayas State University (WVSU) System. It likewise aimed to determine the significant differences in the extent of influence of social networking on the study habits and academic performance of the students when they were grouped as to age, sex, socio-economic status and educational attainment of parents; as well as to ascertain the significant relationships among the extent of influence of social networking, students' study habits and their academic performance. This study utilized the descriptive-correlational method in describing how social networking influenced the study habits and academic performance of the students. Two hundred thirty-five (235) graduating students taking Bachelor of Science in Information Technology (BS InfoTech) at WVSU were utilized as respondents of the study. Researcher-made and duly-validated instruments, such as questionnaire checklists that described the influence of social networking and the status of students' study habits, and the WVSU terminal competencies assessment to measure their academic performance, were used to gather data. Means and standard deviations were used to describe the influence of social networking on the study habits and academic performance of the students. The t-test and ANOVA were used to assess the significant differences in the influence of social networking on the respondents' study habits and academic performance, and Pearson's r correlation was used to test the significant relationships among the extent of influence of social networking, students' study habits and their academic performance. Results revealed a high extent of influence of social networking on the respondents regardless of age, sex, socio-economic status, and educational attainment of their parents.
The status of the students’ study habits was also high while their level of academic performance was basic. There were significant differences in the level of academic performance of students when classified as to age, socio economic status and educational attainment of parent while significant relationship existed between the extent of influence of social networking and the status of the study habits of the respondents and between the respondents’ extent of influence of social networking and the level of their academic performance.
Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010)
The simple, but general formal theory of fun and intrinsic motivation and creativity (1990-2010) is based on the concept of maximizing intrinsic reward for the active creation or discovery of novel, surprising patterns allowing for improved prediction or data compression. It generalizes the traditional field of active learning, and is related to old, but less formal ideas in aesthetics theory and developmental psychology. It has been argued that the theory explains many essential aspects of intelligence including autonomous development, science, art, music, and humor. This overview first describes theoretically optimal (but not necessarily practical) ways of implementing the basic computational principles on exploratory, intrinsically motivated agents or robots, encouraging them to provoke event sequences exhibiting previously unknown, but learnable algorithmic regularities. Emphasis is put on the importance of limited computational resources for online prediction and compression. Discrete and continuous time formulations are given. Previous practical, but nonoptimal implementations (1991, 1995, and 1997-2002) are reviewed, as well as several recent variants by others (2005-2010). A simplified typology addresses current confusion concerning the precise nature of intrinsic motivation.
Deep Learning for Laser Based Odometry Estimation
In this paper we take advantage of recent advances in deep learning techniques focused on image classification to estimate transforms between consecutive point clouds. A standard technique for feature learning is to use convolutional neural networks. Leveraging this technique can help with one of the biggest challenges in robotic motion planning: real-time odometry. Sensors have advanced in recent years to provide vast amounts of precise environmental data, but localization methods can have a difficult time efficiently parsing these large quantities. In order to address this hurdle we utilize convolutional neural networks for reducing the state space of the laser scan. We implement our network in the Theano framework with the Keras wrapper. Input data is collected from a VLP-16 in both small office and large open environments. We present the results of our experiments on varying network configurations. Our approach shows promising results, achieving (per direction) accuracy within 10 cm and an average network prediction time of 4.58 ms.
A subject identification method based on term frequency technique
Analyzing and extracting important information from text documents is crucial and has generated interest in the areas of text mining and information retrieval. Readers tend to scan almost everything in a text document to find specific information; however, reading a document takes time, and extracting information takes additional time. Thus, classifying text by subject can guide a person to relevant information. In this paper, a subject identification method based on term frequency, which categorizes groups of text into a particular subject, is proposed. Since term frequency tends to ignore the semantics of a document, a term extraction algorithm is introduced to improve the relevance of the terms extracted from the text. Evaluation of the extracted terms shows that the proposed method outperforms other extraction techniques.
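The term-frequency step of such a method can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the per-subject keyword lists are hypothetical, and the paper's additional term extraction step is omitted.

```python
from collections import Counter

# Score each candidate subject by the summed frequency of its keyword
# list in the document; pick the highest-scoring subject.
# (Counter returns 0 for absent terms, so unseen keywords score nothing.)
def identify_subject(text, subject_terms):
    freq = Counter(text.lower().split())
    scores = {subject: sum(freq[t] for t in terms)
              for subject, terms in subject_terms.items()}
    return max(scores, key=scores.get)
```

Because raw term frequency ignores semantics (synonyms, word order), methods like the one in the abstract add a term extraction stage on top of this basic scoring.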
Second-line dovitinib (TKI258) in patients with FGFR2-mutated or FGFR2-non-mutated advanced or metastatic endometrial cancer: a non-randomised, open-label, two-group, two-stage, phase 2 study.
BACKGROUND Activating FGFR2 mutations are found in 10-16% of primary endometrial cancers and provide an opportunity for targeted therapy. We assessed the safety and activity of dovitinib, a potent tyrosine-kinase inhibitor of fibroblast growth factor receptors, VEGF receptors, PDGFR-β, and c-KIT, as second-line therapy both in patients with FGFR2-mutated (FGFR2(mut)) endometrial cancer and in those with FGFR2-non-mutated (FGFR2(non-mut)) endometrial cancer. METHODS In this phase 2, non-randomised, two-group, two-stage study, we enrolled adult women who had progressive disease after first-line chemotherapy for advanced or metastatic endometrial cancer from 46 clinical sites in seven countries. We grouped women according to FGFR2 mutation status and gave all women dovitinib (500 mg per day, orally, on a 5-days-on and 2-days-off schedule) until disease progression, unacceptable toxicity, death, or study discontinuation for any other reason. The primary endpoint was proportion of patients in each group who were progression-free at 18 weeks. For each group, the second stage of the trial (enrolment of 20 additional patients) could proceed if at least eight of the first 20 treated patients were progression free at 18 weeks. Activity was assessed in all enrolled patients and safety was assessed in all patients who received at least one dose of dovitinib. The completed study is registered with ClinicalTrials.gov, number NCT01379534. FINDINGS Of 248 patients with FGFR2 prescreening results, 27 (11%) had FGFR2(mut) endometrial cancer. Between Feb 17, 2012, and Dec 13, 2013, we enrolled 22 patients in the FGFR2(mut) group and 31 patients in the FGFR2(non-mut) group. Seven (31·8%, 95% CI 13·9-54·9) patients in the FGFR2(mut) group and nine (29·0%, 14·2-48·0) in the FGFR2(non-mut) group were progression-free at 18 weeks. 
On the basis of predefined criteria, neither group continued to stage two: seven (35%) of the first 20 patients in the FGFR2(mut) group were progression free at 18 weeks, as were five (25%) of the first 20 in the FGFR2(non-mut) group. Rates of treatment-emergent adverse events were similar between groups and events were most frequently gastrointestinal. Overall, the most common grade 3 or 4 adverse events suspected to be related to the study drug were hypertension (nine patients; 17%) and diarrhoea (five; 9%). The most frequently reported serious adverse events suspected to be related to study drug were pulmonary embolism (four patients; 8%), vomiting (four; 8%), dehydration (three; 6%), and diarrhoea (three; 6%). Only one death was deemed to be treatment-related: one patient in the FGFR2(non-mut) group died from cardiac arrest with contributing reason of pulmonary embolism (grade 4, suspected to be study drug related) 4 days previously. INTERPRETATION Second-line dovitinib in FGFR2(mut) and FGFR2(non-mut) advanced or metastatic endometrial cancer had single-agent activity, although it did not reach the prespecified study criteria. Observed treatment effects seemed independent of FGFR2 mutation status. These data should be considered exploratory and additional studies are needed. FUNDING Novartis Pharmaceuticals.
Solvent isotope effects in the oxidation of dipeptides by aqueous chlorine
A kinetic study of the mechanism of oxidation of Ala-Gly and Pro-Gly by aqueous chlorine has been carried out. Among other experimental facts, the deuterium solvent isotope effects were used to clarify the mechanisms involved. In a first stage, N-chlorination takes place, and then the (N-Cl)-dipeptide decomposes through two possible mechanisms, depending on the acidity of the medium. The initial chlorination step shows a small isotope effect. In alkaline medium, two consecutive processes take place: first, the general base-catalyzed formation of an azomethine (β ca. 0.27), which has an inverse deuterium solvent isotope effect (kOH-/kOD- ~ 0.8). In a second step, the hydrolysis of the azomethine intermediate takes place, which is also general base-catalyzed, without deuterium solvent isotope effect, the corresponding uncatalyzed process having a normal deuterium solvent isotope effect (kH2O/kD2O ~ 2). In acid medium, the (N-Cl)-dipeptide undergoes disproportionation to a (N,N)-di-Cl-dipeptide, the very fas...
Potential effectiveness of three different treatment approaches to improve minimal to moderate arm and hand function after stroke--a pilot randomized clinical trial.
OBJECTIVE To test a study design and explore the feasibility and potential effects of conventional neurological therapy, constraint induced therapy and therapeutic climbing to improve minimal to moderate arm and hand function in patients after a stroke. METHOD A pilot study with six-month follow-up in patients after stroke with minimal to moderate arm and hand function admitted for inpatient rehabilitation was performed. Participants were randomly allocated to one of three treatment approaches. Main outcomes were improvement of arm and hand function and adverse effects. RESULTS 283 patients with stroke were screened for inclusion over a two-year period, of which forty-four were included. All patients could be treated according to the protocol. Improvement of arm and hand function was significantly higher in conventional neurological therapy and constraint induced therapy compared with therapeutic climbing at discharge, and at six months follow-up (P < 0.05, effect size = 0.56-0.76). No significant differences in arm and hand function were observed between constraint induced therapy and conventional neurological therapy. Constraint induced therapy participants were significantly less at risk of developing shoulder pain at six months follow-up compared with the other participants (P < 0.05, effect size = 0.82 and 1.79, respectively). CONCLUSIONS The study design needs adaptation to accommodate the stringent inclusion criteria leading to prolonged study duration. Constraint induced therapy seems to be the optimal approach to improve arm and hand function and minimize the risk of shoulder pain for patients with minimal to moderate arm and hand function after stroke in the intermediate term.
A Fully Integrated Shark-Fin Antenna for MIMO-LTE, GPS, WLAN, and WAVE Applications
In this study, a three-dimensional compact antenna solution for the automotive industry is proposed. The antenna solution is designed to fit in a shark-fin case and is easily fabricated from a printed circuit board and a metal sheet with low-cost process and materials. The antenna solution covers Long Term Evolution (LTE), GPS, WLAN, and Wireless Access in the Vehicular Environment (WAVE) bands (850 MHz, 1575 MHz, 2.4 GHz, and 5.9 GHz, respectively). The planar inverted-F antennas are used as multiple-input–multiple-output antennas for the LTE band due to their low-profile structure. Modified planar monopoles are used to obtain omnidirectional radiation patterns for WLAN and WAVE bands. Antenna characteristics such as return loss, isolation, and radiation pattern have been simulated and measured to confirm the possibility for use in automotive applications.
A Survey on Trust Modeling
The concept of trust and/or trust management has received considerable attention in engineering research communities as trust is perceived as the basis for decision making in many contexts and the motivation for maintaining long-term relationships based on cooperation and collaboration. Even if substantial research effort has been dedicated to addressing trust-based mechanisms or trust metrics (or computation) in diverse contexts, prior work has not clearly solved the issue of how to model and quantify trust with sufficient detail and context-based adequateness. The issue of trust quantification has become more complicated as we have the need to derive trust from complex, composite networks that may involve four distinct layers of communication protocols, information exchange, social interactions, and cognitive motivations. In addition, the diverse application domains require different aspects of trust for decision making such as emotional, logical, and relational trust. This survey aims to outline the foundations of trust models for applications in these contexts in terms of the concept of trust, trust assessment, trust constructs, trust scales, trust properties, trust formulation, and applications of trust. We discuss how different components of trust can be mapped to different layers of a complex, composite network; applicability of trust metrics and models; research challenges; and future work directions.
A review of fatigue in people with HIV infection.
Fatigue is often cited by clinicians as a debilitating symptom suffered by the many who are infected with HIV. This article provides a review of HIV-related fatigue, including research on possible physiological causes such as anemia, CD4 count, impaired liver function, impaired thyroid function, and cortisol abnormalities. Psychological causes of fatigue, particularly depression, are reviewed as well. Measurement issues, such as the use of inappropriate tools, the problem of measuring the presence or absence of fatigue, and the use of tools developed for other groups of patients, are reviewed. The need for a comprehensive fatigue tool that is appropriate for people with HIV is discussed. Current treatment research, including thyroid replacement, hyperbaric oxygen, and dextroamphetamine, is presented. Finally, the implications for further research, including the need for qualitative studies to learn more about the phenomenon, develop an instrument to measure fatigue, and examine variables together to get a complete picture of this complex concept, are reviewed.
Design of a New Hysteretic PWM Controller for All Types of DC-to-DC Converters
A new control method using a hysteretic PWM controller for all types of converters, and its proper design method, are presented. The triangular voltage, obtained from a simple RC network connected between the comparator output and the converter output, is superimposed on the output voltage and used as a feedback signal to a hysteretic comparator. Since the hysteretic PWM controller essentially has derivative characteristics and has no error amplifier, the presented method provides no steady-state error voltage on the output and excellent dynamic performance for load current transients by choosing proper values of the time constants in the RC network. Performance of the proposed controller is experimentally verified for the buck, buck-boost and boost converters.
A methodology for fitting and validating metamodels in simulation
This expository paper discusses the relationships among metamodels, simulation models, and problem entities. A metamodel or response surface is an approximation of the input/output function implied by the underlying simulation model. There are several types of metamodel: linear regression, splines, neural networks, etc. This paper distinguishes between fitting and validating a metamodel. Metamodels may have different goals: (i) understanding, (ii) prediction, (iii) optimization, and (iv) verification and validation. For this metamodeling, a process with thirteen steps is proposed. Classic design of experiments (DOE) is summarized, including standard measures of fit such as the R-square coefficient and cross-validation measures. This DOE is extended to sequential or stagewise DOE. Several validation criteria, measures, and estimators are discussed. Metamodels in general are covered, along with a procedure for developing linear regression (including polynomial) metamodels.
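The fit-then-validate loop described above can be sketched in a few lines: fit a linear regression metamodel to simulated input/output data by least squares, report the R-square coefficient as the fit measure, and use leave-one-out cross-validation as the validation measure. The one-factor simulation model and design points below are hypothetical stand-ins.

```python
def fit_linear(xs, ys):
    """Least-squares fit of the metamodel y = b0 + b1*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    return my - b1 * mx, b1

def r_squared(xs, ys, b0, b1):
    """Standard measure of fit: 1 - SS_res / SS_tot."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def loo_cv_mse(xs, ys):
    """Leave-one-out cross-validation: refit without point i, predict it."""
    errs = []
    for i in range(len(xs)):
        b0, b1 = fit_linear(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append((ys[i] - (b0 + b1 * xs[i])) ** 2)
    return sum(errs) / len(errs)

def simulate(x):
    return 2.0 + 3.0 * x   # hypothetical simulation response

design = [0.0, 1.0, 2.0, 3.0, 4.0]   # one-factor design points
responses = [simulate(x) for x in design]
b0, b1 = fit_linear(design, responses)   # recovers intercept 2 and slope 3
```

With a noise-free linear response the metamodel is exact, so R-square is 1 and the cross-validation error is zero; with a real stochastic simulation both measures degrade and guide the choice of metamodel type.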
Should we screen for globin gene mutations in blood samples with mean corpuscular volume (MCV) greater than 80 fL in areas with a high prevalence of thalassaemia?
AIMS To investigate whether it is worthwhile, in areas where thalassaemia is common, to screen for globin gene mutations in subjects with a mean corpuscular volume (MCV) above 80 fL, especially in partners of known thalassaemia carriers. METHODS Blood samples from 95 subjects with MCV between 80 and 85 fL were screened for the presence of alpha globin gene mutations and the haemoglobin (Hb) E mutation. RESULTS Thirty-four subjects harboured globin gene mutations. Of these, 31 had deletions of one alpha globin gene, one had Hb Constant Spring, and three had Hb E mutations. CONCLUSION Based on the above figures and known prevalence rates of thalassaemia carriers, it would seem worthwhile to screen for globin gene mutations in partners of known thalassaemia carriers, regardless of MCV, to identify pregnancies at risk of Hb H disease or Hb E/beta thalassaemia.
A broadband printed bow-tie antenna with a simplified balanced feed
A broadband printed bow-tie antenna with a simplified balanced feeding network and modified tapering is presented. Microstrip and parallel-strip transmission lines printed on the substrate with high dielectric permittivity realize the proposed feeding network. The ground-plane transition between the microstrip line and the parallel-strip line is exponentially tapered so as to reduce the reflection losses and produce a balanced feed for the antenna. This printed bow-tie antenna achieves a 68% measured bandwidth and a stable radiation pattern within the X-band. Commercial FEM software is used for optimization of the bow-tie antenna and the simulation results agree very well with the experiment. © 2005 Wiley Periodicals, Inc. Microwave Opt Technol Lett 47: 534 –536, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.21221
Exploiting geographical influence for collaborative point-of-interest recommendation
In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.
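A minimal sketch of the geographical-influence component, assuming a power-law distance distribution P(d) = a·d^b with made-up coefficients a and b (the paper fits these to check-in data): under the naive-Bayes assumption, the score of a candidate POI is the product of power-law probabilities over its distances to the user's previously visited POIs, computed here in log-space.

```python
import math

A, B = 1.0, -1.5   # assumed power-law coefficients: P(d) = A * d**B

def powerlaw_prob(d, a=A, b=B):
    """Power-law probability of visiting two POIs separated by distance d."""
    return a * max(d, 0.01) ** b   # clamp tiny distances to avoid blow-up

def geo_score(candidate, visited):
    """Naive-Bayes product of pairwise distance probabilities (log-space)."""
    log_p = 0.0
    for poi in visited:
        d = math.dist(candidate, poi)   # Euclidean stand-in for geo distance
        log_p += math.log(powerlaw_prob(d))
    return log_p

visited = [(0.0, 0.0), (1.0, 0.0)]       # user's prior check-ins
near, far = (0.5, 0.0), (10.0, 10.0)     # two candidate POIs
assert geo_score(near, visited) > geo_score(far, visited)
```

Nearby candidates score higher, reproducing the spatial-clustering effect; in the full framework this score would be fused with the preference and social-influence scores.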
Generalized Analytical Design Equations for Variable Slope Class-E Power Amplifiers
The Class-E power amplifier is widely used because of its high efficiency, which results from switching at zero voltage and zero slope of the switch voltage. In this paper, we extend our general analytical solutions for the Class-E power amplifier to the ideal single-ended Variable Slope Class-E (Class-EVS) power amplifier, which switches at zero voltage but not necessarily at zero slope. It is shown that a Class-EVS power amplifier can have a higher tolerance to switch (transistor) output capacitance than the normal Class-E power amplifier; this higher tolerance can be exchanged for higher drain efficiency. The presented design equations for Class-EVS power amplifiers give more degrees of freedom in the design and optimization of switching RF power amplifiers.
Coupled Multi-Layer Attentions for Co-Extraction of Aspect and Opinion Terms
The task of aspect and opinion term co-extraction aims to explicitly extract aspect terms describing features of an entity and opinion terms expressing emotions from user-generated texts. One effective approach to this task is to exploit relations between aspect terms and opinion terms by parsing the syntactic structure of each sentence. However, this approach requires expensive parsing effort and depends highly on the quality of the parsing results. In this paper, we offer a novel deep learning model, named coupled multi-layer attentions. The proposed model provides an end-to-end solution and does not require any parsers or other linguistic resources for preprocessing. Specifically, the proposed model is a multi-layer attention network, where each layer consists of a couple of attentions with tensor operators. One attention is for extracting aspect terms, while the other is for extracting opinion terms. They are learned interactively to dually propagate information between aspect terms and opinion terms. Through multiple layers, the model can further exploit indirect relations between terms for more precise information extraction. Experimental results on three benchmark datasets from the SemEval Challenges of 2014 and 2015 show that our model achieves state-of-the-art performance compared with several baselines.
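As a rough illustration of the building block involved, the snippet below implements a single dot-product attention over token vectors in plain Python; the paper's model couples two such attentions per layer with tensor operators and stacks multiple layers, none of which is shown here.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, tokens):
    """Weight token vectors by dot-product similarity to the query
    and return the resulting mixture vector."""
    scores = [sum(q * t for q, t in zip(query, tok)) for tok in tokens]
    weights = softmax(scores)
    dim = len(tokens[0])
    return [sum(w * tok[d] for w, tok in zip(weights, tokens))
            for d in range(dim)]

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy token embeddings
query = [10.0, 0.0]                             # strongly matches dimension 0
out = attend(query, tokens)
assert out[0] > 0.9                             # mixture dominated by matching tokens
```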
Liquid biopsy: monitoring cancer-genetics in the blood
Cancer is associated with mutated genes, and analysis of tumour-linked genetic alterations is increasingly used for diagnostic, prognostic and treatment purposes. The genetic profile of solid tumours is currently obtained from surgical or biopsy specimens; however, the latter procedure cannot always be performed routinely owing to its invasive nature. Information acquired from a single biopsy provides a spatially and temporally limited snap-shot of a tumour and might fail to reflect its heterogeneity. Tumour cells release circulating free DNA (cfDNA) into the blood, but the majority of circulating DNA is often not of cancerous origin, and detection of cancer-associated alleles in the blood has long been impossible to achieve. Technological advances have overcome these restrictions, making it possible to identify both genetic and epigenetic aberrations. A liquid biopsy, or blood sample, can provide the genetic landscape of all cancerous lesions (primary and metastases) as well as offering the opportunity to systematically track genomic evolution. This Review will explore how tumour-associated mutations detectable in the blood can be used in the clinic after diagnosis, including the assessment of prognosis, early detection of disease recurrence, and as surrogates for traditional biopsies with the purpose of predicting response to treatments and the development of acquired resistance.
Simulation optimization: simulation-based optimization
In this tutorial we present an introduction to simulation-based optimization, which is, perhaps, the most important new simulation technology in the last five years. We give a precise statement of the problem being addressed and also experimental results for two commercial optimization packages applied to a manufacturing example with seven decision variables.
Deep reverse tone mapping
Inferring a high dynamic range (HDR) image from a single low dynamic range (LDR) input is an ill-posed problem where we must compensate lost data caused by under-/over-exposure and color quantization. To tackle this, we propose the first deep-learning-based approach for fully automatic inference using convolutional neural networks. Because a naive way of directly inferring a 32-bit HDR image from an 8-bit LDR image is intractable due to the difficulty of training, we take an indirect approach; the key idea of our method is to synthesize LDR images taken with different exposures (i.e., bracketed images) based on supervised learning, and then reconstruct an HDR image by merging them. By learning the relative changes of pixel values due to increased/decreased exposures using 3D deconvolutional networks, our method can reproduce not only natural tones without introducing visible noise but also the colors of saturated pixels. We demonstrate the effectiveness of our method by comparing our results not only with those of conventional methods but also with ground-truth HDR images.
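The final merging step the method relies on can be sketched independently of the networks: a classic weighted HDR merge of bracketed exposures, with hat-shaped weights that discount under- and over-exposed pixels. Pixel values are assumed linear in [0, 1]; this is standard merging practice, not the paper's learned component.

```python
def hat_weight(z):
    """Trust mid-range pixels; distrust under-/over-exposed ones (z in [0, 1])."""
    return 1.0 - abs(2.0 * z - 1.0)

def merge_hdr(exposures):
    """exposures: list of (pixel_value in [0, 1], exposure_time) pairs
    for one pixel; returns the weighted radiance estimate."""
    num = den = 0.0
    for z, t in exposures:
        w = hat_weight(z)
        num += w * (z / t)   # radiance estimate from this exposure
        den += w
    return num / den if den > 0 else 0.0

# A pixel of true radiance 0.3 seen at exposure times 1, 2 and 4
# (the longest exposure clips at 1.0 and thus receives zero weight):
brackets = [(min(0.3 * t, 1.0), t) for t in (1.0, 2.0, 4.0)]
radiance = merge_hdr(brackets)   # recovers 0.3 despite the clipped bracket
```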
Position-aware activity recognition with wearable devices
Reliable human activity recognition with wearable devices enables the development of human-centric pervasive applications. We aim to develop a robust wearable-based activity recognition system for real-life situations where the device position is up to the user or where a user is unable to collect initial training data. Consequently, in this work we focus on the problem of recognizing the on-body position of the wearable device, followed by comprehensive experiments concerning subject-specific and cross-subjects activity recognition approaches that rely on acceleration data. We introduce a device localization method that predicts the on-body position with an F-measure of 89% and a cross-subjects activity recognition approach that considers common physical characteristics. In this context, we present a real-world data set collected from 15 participants performing 8 common activities while carrying 7 wearable devices in different on-body positions. Our results show that detecting the device position consistently improves activity recognition for common activities. Regarding cross-subjects models, we identified the waist as the most suitable device location, at which the acceleration patterns for the same activity are most similar across people. In this context, our results provide evidence for the reliability of cross-subjects models based on physical characteristics.
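A toy sketch of the recognition pipeline described above, assuming simple statistical features (mean and standard deviation of an acceleration window) and a nearest-centroid classifier; the actual system uses richer features and per-position models, and all numbers below are invented.

```python
from statistics import mean, stdev

def features(window):
    """Two simple statistics of an acceleration-magnitude window."""
    return (mean(window), stdev(window))

def nearest_centroid(feat, centroids):
    """centroids: {activity: feature tuple} -> best-matching activity label."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(feat, centroids[label]))

# Hypothetical training centroids for two activities at one device position:
centroids = {"sitting": (9.8, 0.1), "walking": (9.8, 2.5)}

window = [9.8, 12.0, 7.5, 11.2, 8.4, 9.9]   # high variance suggests walking
label = nearest_centroid(features(window), centroids)
```

Position detection would select which set of centroids to use, which is how knowing the on-body position improves recognition.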
Scalability analysis of three monitoring and information systems: MDS2, R-GMA, and Hawkeye
Monitoring and information system (MIS) implementations provide data about available resources and services within a distributed system, or Grid. A comprehensive performance evaluation of an MIS can aid in detecting potential bottlenecks, advise in deployment, and help improve future system development. In this paper, we analyze and compare the performance of three implementations in a quantitative manner: the Globus Toolkit® Monitoring and Discovery Service (MDS2), the European DataGrid Relational Grid Monitoring Architecture (R-GMA), and the Condor project’s Hawkeye. We use the NetLogger toolkit to instrument the main service components of each MIS and conduct four sets of experiments to benchmark their scalability with respect to the number of users, the number of resources, and the amount of data collected. Our study provides quantitative measurements comparable across all systems. We also find performance bottlenecks and identify how they relate to the design goals, underlying architectures, and implementation technologies of the corresponding MIS, and we present guidelines for deploying monitoring and information systems in practice.
Renal resistive index better predicts the occurrence of acute kidney injury than cystatin C.
The objective of this study was to determine the predictive value of the renal resistive index (RI) and cystatin C values in serum (SCys) and urine (UCys) in the development of acute kidney injury (AKI) in critically ill patients with severe sepsis or polytrauma. This was a prospective, double-center, descriptive study. There were 58 patients with severe sepsis (n = 28) or polytrauma (n = 30). Renal resistive index, SCys, and UCys were measured within 12 h following admission (day 1 [D1]) to the intensive care unit. Renal function was assessed using the AKI network classification: on day 3 (D3), 40 patients were at stage 0 or 1, and 18 were at stage 2 or 3. Patients with AKI stage 2 or 3 had significantly higher RI (0.80 vs. 0.66, P < 0.0001), SCys (1.23 vs. 0.68 mg/L, P = 0.0002), and UCys (3.32 vs. 0.09 mg/L, P = 0.0008). They also had higher Simplified Acute Physiology Score II, arterial lactate level, and intensive care unit mortality. In multivariate analysis, an RI of greater than 0.707 on D1 was the only parameter predictive of the development of AKI stage 2 or 3 on D3 (P = 0.0004). In the subgroup of patients with AKI stage 2 or 3 on D1, RI remained the only parameter associated with persistent AKI on D3 (P = 0.016). In multivariate analysis comparing the predictive value of RI, SCys, and UCys, RI was the only parameter predictive of AKI stage 2 or 3 on D3. Renal resistive index seems to be a promising tool to assess the risk of AKI.
Behavioral dynamics on the web: Learning, modeling, and prediction
The queries people issue to a search engine and the results clicked following a query change over time. For example, after the earthquake in Japan in March 2011, the query japan spiked in popularity and people issuing the query were more likely to click government-related results than they would prior to the earthquake. We explore the modeling and prediction of such temporal patterns in Web search behavior. We develop a temporal modeling framework adapted from physics and signal processing and harness it to predict temporal patterns in search behavior using smoothing, trends, periodicities, and surprises. Using current and past behavioral data, we develop a learning procedure that can be used to construct models of users' Web search activities. We also develop a novel methodology that learns to select the best prediction model from a family of predictive models for a given query or a class of queries. Experimental results indicate that the predictive models significantly outperform baseline models that weight historical evidence the same for all queries. We present two applications where new methods introduced for the temporal modeling of user behavior significantly improve upon the state of the art. Finally, we discuss opportunities for using models of temporal dynamics to enhance other areas of Web search and information retrieval.
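The smoothing-and-trend part of such a temporal framework can be illustrated with Holt's linear method (the paper's framework additionally models periodicities and surprises, which are omitted here); the smoothing constants are arbitrary.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """One-pass Holt exponential smoothing with a linear trend;
    returns the `horizon`-step-ahead forecast."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)   # smoothed level
        trend = beta * (level - prev_level) + (1 - beta) * trend  # smoothed trend
    return level + horizon * trend

# A steadily rising query-volume series is extrapolated upward:
daily_volume = [100, 110, 120, 130, 140]
forecast = holt_forecast(daily_volume)
assert forecast > daily_volume[-1]
```

On a perfectly linear series the method recovers the trend exactly; a query spike like the earthquake example would appear as a large residual against this forecast, i.e., a "surprise."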
Prospects and Challenges in Algal Biotechnology
In this chapter, event-based control approaches for microalgae culture in industrial reactors are evaluated. These control systems are applied to regulate microalgae culture growth conditions such as pH and dissolved-oxygen concentration. The analyzed event-based control systems use sensor- and actuator-deadband approaches in order to provide the desired controller properties. Additionally, a selective event-based scheme is evaluated for simultaneous control of pH and dissolved oxygen. In such configurations, the event-based approach makes it possible to adapt the control actions to the dynamic state of the controlled bioprocess. In this way, the event-based control algorithm establishes a tradeoff between control performance and the number of control update actions. This can be directly related to a reduction in CO2 injection times, which is also reflected in reduced CO2 losses. The application of the selective event-based scheme improves biomass productivity, since the controlled variables are kept within the limits for an optimal photosynthesis rate. Moreover, such a control scheme allows effective CO2 utilization and minimization of aeration-system energy. The analyzed control system configurations are evaluated for both tubular and raceway photobioreactors to prove their viability for different reactor configurations as well as control system objectives. Additionally, control performance indexes are used to show the efficiency of the event-based control approaches.
Land Reform in Southern and Eastern Africa: Key issues for strengthening women's access to and rights in land
Report on a desktop study commissioned by the Food and Agriculture Organization (FAO). This paper was prepared under contract with the Food and Agriculture Organization of the United Nations (FAO). The positions and opinions presented are those of the author alone, and are not intended to represent the views of FAO.
Japanese IGT subjects with high insulin response are far more frequently associated with the metabolic syndrome than those with low insulin response
Impaired glucose tolerance (IGT) represents a prediabetic state positioned between normal glucose tolerance and diabetes, a state also assumed to make individuals highly susceptible to atherosclerotic disease. IGT also accounts for a highly heterogeneous population, with the condition varying from individual to individual. In this study, we stratified subjects with IGT by their insulin response and compared the pathology of IGT associated with high versus low insulin response to gain insight into the diverse pathology of IGT. Of the male corporate employees who underwent a 75 g OGTT at the corporation's healthcare center, 150 individuals diagnosed with IGT (isolated IGT, combined IGT and IFG) comprised our study subjects. The study subjects were stratified into four quartiles by percentile AUC for insulin; those in the 25th percentile or below were defined as the low insulin response group (n=37), those in the 76th percentile or above as the high insulin response group (n=38), and these groups were compared. There was no significant difference between the two groups in post-OGTT glucose response or area under the glucose curve. However, the high insulin response group had higher BMI, subcutaneous fat area, uric acid levels, HOMA-β cell values, and Δinsulin/Δglucose (30 min) than the low insulin response group. The number of risk factors for the metabolic syndrome (as defined by the ATPIII diagnostic criteria) per subject was 2.84±0.17 in the high insulin response group and 2.08±0.20 in the low insulin response group, significantly (p<0.05) higher in the high insulin response group. Furthermore, the incidence of the metabolic syndrome as defined by the ATPIII diagnostic criteria was 63.2% (24/38) in the high insulin response group vs 32.4% (12/37) in the low insulin response group, significantly (p<0.01) higher in the high insulin response group. Likewise, the incidence of the metabolic syndrome as defined by the Japanese diagnostic criteria was significantly (p<0.05) higher in the high insulin response group, at 50% (19/38), compared with 27.0% (10/37) in the low insulin response group. Our findings suggest that IGT subjects with high insulin response and those with low insulin response differ greatly in the number of coexisting atherosclerotic risk factors and the frequency with which they are associated with the metabolic syndrome. The results also show, in middle-aged Japanese males, that of the two forms of IGT, IGT with high insulin response is more closely linked to the pathogenesis of atherosclerotic cardiovascular disease.
Event threading within news topics
With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies <i>event threading</i>. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories. We formally define the novel problem, suggest evaluation metrics, and present a few techniques for solving the problem. Besides the standard word-based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies. Our experiments on a manually labeled data set show that our models effectively identify the events and capture dependencies among them.
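A compact sketch of the event-recognition idea: assign a story to an existing event only if it is close in time (temporal locality) and lexically similar to that event's stories, otherwise start a new event. The thresholds and the Jaccard similarity are illustrative choices, not the paper's exact features.

```python
def jaccard(a, b):
    """Word-overlap similarity between two word collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def thread_events(stories, max_gap=2, min_sim=0.3):
    """stories: list of (timestamp, words). Returns a list of events,
    each given as a list of story indices."""
    events = []   # each event: {"last_t": ..., "words": set, "members": [...]}
    for i, (t, words) in enumerate(stories):
        best = None
        for ev in events:
            if t - ev["last_t"] <= max_gap and jaccard(words, ev["words"]) >= min_sim:
                best = ev
                break
        if best is None:
            best = {"last_t": t, "words": set(), "members": []}
            events.append(best)
        best["last_t"] = t
        best["words"] |= set(words)
        best["members"].append(i)
    return [ev["members"] for ev in events]

stories = [
    (0, ["earthquake", "tsunami", "japan"]),
    (1, ["japan", "tsunami", "rescue"]),   # near in time, shares words -> same event
    (9, ["election", "vote", "senate"]),   # too far in time -> new event
]
```

Dependencies between the recovered events (the second half of the task) could then be inferred from their time-ordering, which this sketch omits.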
A reflective learning report about the implementation and impacts of Psychological First Aid (PFA) in Gaza
Psychological First Aid (PFA) is the recommended immediate psychosocial response during crises. As PFA is now widely implemented in crises worldwide, there are increasing calls to evaluate its effectiveness. World Vision used PFA as a fundamental component of their emergency response following the 2014 conflict in Gaza. Anecdotal reports from Gaza suggest a range of benefits for those who received PFA. Though not intending to undertake rigorous research, World Vision explored learnings about PFA in Gaza through Focus Group Discussions with PFA providers, Gazan women, men and children and a Key Informant Interview with a PFA trainer. The qualitative analyses aimed to determine if PFA helped individuals to feel safe, calm, connected to social supports, hopeful and efficacious - factors suggested by the disaster literature to promote coping and recovery (Hobfoll et al., 2007). Results show positive psychosocial benefits for children, women and men receiving PFA, confirming that PFA contributed to: safety, reduced distress, ability to engage in calming practices and to support each other, and a greater sense of control and hopefulness irrespective of their adverse circumstances. The data shows that PFA formed an important part of a continuum of care to meet psychosocial needs in Gaza and served as a gateway for addressing additional psychosocial support needs. A "whole-of-family" approach to PFA showed particularly strong impacts and strengthened relationships. Of note, the findings from World Vision's implementation of PFA in Gaza suggests that future PFA research go beyond a narrow focus on clinical outcomes, to a wider examination of psychosocial, familial and community-based outcomes.
The dawn of the liquid biopsy in the fight against cancer
Cancer is a molecular disease associated with alterations in the genome, which, thanks to the highly improved sensitivity of mutation detection techniques, can be identified in cell-free DNA (cfDNA) circulating in blood, a method also called liquid biopsy. This is a non-invasive alternative to surgical biopsy and has the potential of revealing the molecular signature of tumors to aid in the individualization of treatments. In this review, we focus on cfDNA analysis, its advantages, and clinical applications employing genomic tools (NGS and dPCR) particularly in the field of oncology, and highlight its valuable contributions to early detection, prognosis, and prediction of treatment response.
The Complete Exosome Workflow Solution: From Isolation to Characterization of RNA Cargo
Exosomes are small (30-150 nm) vesicles containing unique RNA and protein cargo, secreted by all cell types in culture. They are also found in abundance in body fluids including blood, saliva, and urine. At the moment, the mechanism of exosome formation, the makeup of the cargo, biological pathways, and resulting functions are incompletely understood. One of their most intriguing roles is intercellular communication--exosomes function as the messengers, delivering various effector or signaling macromolecules between specific cells. There is an exponentially growing need to dissect structure and the function of exosomes and utilize them for development of minimally invasive diagnostics and therapeutics. Critical to further our understanding of exosomes is the development of reagents, tools, and protocols for their isolation, characterization, and analysis of their RNA and protein contents. Here we describe a complete exosome workflow solution, starting from fast and efficient extraction of exosomes from cell culture media and serum to isolation of RNA followed by characterization of exosomal RNA content using qRT-PCR and next-generation sequencing techniques. Effectiveness of this workflow is exemplified by analysis of the RNA content of exosomes derived from HeLa cell culture media and human serum, using Ion Torrent PGM as a sequencing platform.
A study on chip thinning process for ultra thin memory devices
Telecommunication equipment that supports high-level information networks is being made portable, small, and lightweight. Thus, the miniaturization of semiconductor devices is necessary, and chip thinning technologies are important key technologies to achieve it. The manufacturing steps for semiconductor devices are generally classified into steps for patterning semiconductor elements in a wafer, steps for thinning the wafer, and steps for dicing semiconductor elements into chips and sealing the chips in packages. Wafers are thinned by means of mechanical in-feed grinding using a grindstone containing diamond particles, resulting in spiral grinding saw marks on the backside of the wafers. Dicing wafers always causes surface chipping, dicing saw marks on the chip sides, and backside chipping. Such defects on chip faces become sources of cracks, decreasing chip strength. Therefore, the manufacturing process for thin chips must meet the requirement of no damage on any chip face. We previously proposed a novel wafer thinning process, namely dicing before grinding (DBG), and a DBG + mirror-finish process at the 56th ECTC, and also proposed a novel flip-chip process for ultra-thin chips at the 57th ECTC. Incidentally, the thickness required for a semiconductor chip to work has not yet been determined. In this paper, memory chip performance test results are described.
YAGO: A Large Ontology from Wikipedia and WordNet
This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95%, as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO's data.
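The type-checking idea can be illustrated with a toy triple store: a fact (subject, relation, object) is accepted only if the subject and object satisfy the relation's domain and range under a small Is-A hierarchy. All entity and relation names below are invented, not taken from YAGO.

```python
ISA = {                      # toy Is-A hierarchy (child -> parent)
    "Einstein": "physicist",
    "physicist": "scientist",
    "scientist": "person",
    "Ulm": "city",
    "city": "location",
}

DOMAINS = {"bornIn": ("person", "location")}   # relation signatures

def is_a(entity, cls):
    """Walk up the Is-A hierarchy to test class membership."""
    while entity is not None:
        if entity == cls:
            return True
        entity = ISA.get(entity)
    return False

def add_fact(kb, subj, rel, obj):
    """Accept the triple only if subject/object types match the relation."""
    dom, rng = DOMAINS[rel]
    if is_a(subj, dom) and is_a(obj, rng):
        kb.append((subj, rel, obj))
        return True
    return False

kb = []
assert add_fact(kb, "Einstein", "bornIn", "Ulm")       # type-correct fact
assert not add_fact(kb, "Ulm", "bornIn", "Einstein")   # rejected by type check
```

Rejecting type-inconsistent candidate facts in this way is one mechanism by which an extraction pipeline can keep precision high.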
Humor-based online positive psychology interventions: A randomized placebo-controlled long-term trial
While correlational evidence exists that humor is positively associated with well-being, only a few studies have addressed causality. We tested the effects of five humor-based activities on happiness and depression in a placebo-controlled, self-administered online positive psychology intervention study (N = 632 adults). All of the five one-week interventions enhanced happiness, three for up to six months (i.e., three funny things, applying humor, and counting funny things), whereas there were only short-term effects on depression (all were effective directly after the intervention). Additionally, we tested the moderating role of indicators of a person × intervention-fit and identified early changes in well-being and preference (liking of the intervention) as the most potent indicators for changes six months after the intervention. Overall, we were able to replicate existing work, but also extend knowledge in the field by testing newly developed interventions for the first time. Findings are discussed with respect to the current literature. Originally published at: Wellenzohn, Sara; Proyer, René T.; Ruch, Willibald (2016). Humor-based online positive psychology interventions: A randomized placebo-controlled long-term trial. Journal of Positive Psychology, 11(6):584-594. DOI: https://doi.org/10.1080/17439760.2015.1137624
Transcriptional activation of long terminal repeat retrotransposon sequences in the genome of pitaya under abiotic stress.
Frequent somatic variations exist in pitaya (Hylocereus undatus) plants grown under abiotic stress conditions. Long terminal repeat (LTR) retrotransposons can be activated under stressful conditions and play key roles in plant genetic variation and evolution. However, whether LTR retrotransposons promote pitaya somatic variations by regulating abiotic stress responses is still uncertain. In this study, transcriptionally active LTR retrotransposons were identified in pitaya after exposure to a number of stress factors, including in vitro culturing, osmotic changes, extreme temperatures and hormone treatments. In total, 26 LTR retrotransposon reverse transcriptase (RT) cDNA sequences were isolated and identified as belonging to 9 Ty1-copia and 4 Ty3-gypsy families. Several RT cDNA sequences had differing similarity levels with RTs from pitaya genomic DNA and other plant species, and were differentially expressed in pitaya under various stress conditions. LTR retrotransposons accounted for at least 13.07% of the pitaya genome. HuTy1P4 had a high copy number and low expression level in young stems of pitaya, and its expression level increased after exposure to hormones and abiotic stresses, including in vitro culturing, osmotic changes, cold and heat. HuTy1P4 may have undergone diverse transposition events in 13 pitaya plantlets successively subcultured for four cycles. Thus, the expression levels of these retrotransposons in pitaya were associated with stress responses and may be involved in the occurrence of somaclonal variation in pitaya.
Sample Size Requirements for Traditional and Regression-Based Norms.
Test norms enable determining the position of an individual test taker in the group. The most frequently used approach to obtain test norms is traditional norming. Regression-based norming may be more efficient than traditional norming and is rapidly growing in popularity, but little is known about its technical properties. A simulation study was conducted to compare the sample size requirements for traditional and regression-based norming by examining the 95% interpercentile ranges for percentile estimates as a function of sample size, norming method, size of covariate effects on the test score, test length, and number of answer categories in an item. Provided the assumptions of the linear regression model hold in the data, for a subdivision of the total group into eight equal-size subgroups, we found that regression-based norming requires samples 2.5 to 5.5 times smaller than traditional norming. Sample size requirements are presented for each norming method, test length, and number of answer categories. We emphasize that additional research is needed to establish sample size requirements when the assumptions of the linear regression model are violated.
OpenAI Gym
OpenAI Gym is a toolkit for reinforcement learning research. It includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms. This whitepaper discusses the components of OpenAI Gym and the design decisions that went into the software.
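The common interface the whitepaper describes boils down to a small contract: `reset()` returns an initial observation and `step(action)` returns `(observation, reward, done, info)`. A minimal self-contained sketch (a toy corridor environment of our own, not one of Gym's registered benchmarks) shows the agent-environment loop every Gym task shares:

```python
import random

class CorridorEnv:
    """Toy environment exposing the classic Gym-style interface:
    reset() -> observation, step(action) -> (observation, reward, done, info).
    This is an illustration of the API shape, not a real Gym environment."""
    def __init__(self, length=5):
        self.length = length
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):          # action: 0 = move left, 1 = move right
        self.pos = max(0, self.pos + (1 if action == 1 else -1))
        done = self.pos >= self.length
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}

# the standard agent-environment loop shared by every Gym benchmark
env = CorridorEnv()
obs, total = env.reset(), 0.0
done = False
while not done:
    obs, reward, done, info = env.step(random.choice([0, 1]))
    total += reward
print(total)   # 1.0 once the end of the corridor is reached
```

Because all benchmarks expose this same loop, an algorithm written against the interface can be compared across tasks without modification, which is the design decision the toolkit is built around.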
A Systematic Method for Designing a PR Controller and Active Damping of the LCL Filter for Single-Phase Grid-Connected PV Inverters
The Proportional Resonant (PR) current controller provides gains at a certain frequency (resonant frequency) and eliminates steady-state errors. Therefore, the PR controller can be successfully applied to single-phase grid-connected PV inverter current control. On the contrary, a PI controller has steady-state errors and limited disturbance rejection capability. Compared with the L and LC filters, the LCL filter has excellent harmonic suppression capability, but the inherent resonant peak of the LCL filter may introduce instability in the whole system. Therefore, damping must be introduced to improve the control of the system. Considering the controller and the LCL filter active damping as a whole system makes the controller design method more complex. In fact, their frequency responses may affect each other. The traditional trial-and-error procedure is too time-consuming and the design process is inefficient. This paper provides a detailed analysis of the frequency response influence between the PR controller and the LCL filter regarded as a whole system. In addition, the paper presents a systematic method for designing controller parameters and the capacitor current feedback coefficient factor of LCL filter active-damping. The new method relies on meeting the stability margins of the system. Moreover, the paper also clarifies the impact of the grid on the inverter output current. Numerical simulation and a 3 kW laboratory setup assessed the feasibility and effectiveness of the proposed method. (Energies 2014, 7, 3935)
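For context, the ideal PR current controller referred to here is commonly written as the transfer function (a standard textbook form, not quoted from this paper):

```latex
G_{PR}(s) \;=\; K_p \;+\; \frac{K_r\, s}{s^{2} + \omega_0^{2}}
```

where \(\omega_0\) is the resonant (grid) angular frequency. The resonant term provides, in the ideal case, infinite gain at \(\omega_0\), which is what eliminates the steady-state error when tracking a sinusoidal grid-frequency reference — something a plain PI controller cannot do.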
Interactive Realizers: A New Approach to Program Extraction from Nonconstructive Proofs
We propose a realizability interpretation of a system for quantifier-free arithmetic which is equivalent to the fragment of classical arithmetic without nested quantifiers, called here EM1-arithmetic. We interpret classical proofs as interactive learning strategies, namely as processes going through several stages of knowledge and learning by interacting with the “nature,” represented by the standard interpretation of closed atomic formulas, and with each other. We obtain in this way a program extraction method by proof interpretation, which is faithful with respect to proofs, in the sense that it is compositional and that it does not need any translation.
A Lyapunov approach to the stability of fractional differential equations
Lyapunov stability of fractional differential equations is addressed in this paper. The key concept is the frequency distributed fractional integrator model, which is the basis for a global state space model of FDEs. Two approaches are presented: the direct one is intuitive but it leads to a large dimension parametric problem while the indirect one, which is based on the continuous frequency distribution, leads to a parsimonious solution. Two examples, with linear and nonlinear FDEs, exhibit the main features of this new methodology.
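The frequency distributed fractional integrator model the abstract builds on is usually stated as follows (standard diffusive-representation form, given for context rather than quoted from the paper): for \(0 < \alpha < 1\), the fractional integrator \(x = I^{\alpha} v\) admits the infinite-dimensional state model

```latex
\frac{\partial z(\omega,t)}{\partial t} = -\,\omega\, z(\omega,t) + v(t),
\qquad
x(t) = \int_{0}^{\infty} \mu_{\alpha}(\omega)\, z(\omega,t)\, d\omega,
\qquad
\mu_{\alpha}(\omega) = \frac{\sin(\alpha\pi)}{\pi}\,\omega^{-\alpha}
```

so the fractional system is rewritten as a continuum of first-order modes indexed by \(\omega\), to which classical Lyapunov machinery can then be applied.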
Attention Guided Deep Imitation Learning
When a learning agent attempts to imitate human visuomotor behaviors, it may benefit from knowing the human demonstrator's visual attention. Such information could clarify the goal of the demonstrator, i.e., the object being attended is the most likely target of the current action. Hence it could help the agent better infer and learn the demonstrator's underlying state representation for decision making. We collect human control actions and eye-tracking data for playing Atari games. We train a deep neural network to predict human actions, and show that including gaze information significantly improves the prediction accuracy. In addition, more biologically correct representation enhances prediction accuracy.
Energy Efficient Routing (EER) For Reducing Congestion and Time Delay in Wireless Sensor Network
A Wireless Sensor Network (WSN) consists of highly scalable sensor nodes with limited storage capability. The nodes are autonomous devices deployed in a distributed manner, and a sensor node can communicate information directly or indirectly. In a WSN, packets must be routed from source to destination within a limited power budget. The sensor nodes are highly mobile, and under such dynamic scenarios the routing path and the network topology change frequently. A node in the routing path should be aware of information about its nearest neighbors. In traditional routing protocols, every node in the network exchanges periodic one-hop beacons; beacons are short messages sent periodically to inform neighboring nodes of a node's identity and position in the network. In existing approaches, several problems can occur during data forwarding. To overcome these problems, an Energy Efficient Routing (EER) approach is proposed in this paper. In the proposed modelling, a new algorithm named Discrete Delay Function (DDF) is introduced, in which an RTS/CTS message-handshaking mechanism is used for data forwarding. This mechanism reduces the limitations of the existing approaches. Simulation results show that the EER scheme significantly outperforms existing protocols in wireless sensor networks with highly dynamic network topologies. Index Terms – Discrete Delay Function, Energy Efficient Routing, Geographic Adaptive Fidelity, Geographic Energy Aware Routing.
An analytical model for deflection of flexible needles during needle insertion
This paper presents a new needle deflection model that is an extension of prior work in our group based on the principles of beam theory. The use of a long flexible needle in percutaneous interventions necessitates accurate modeling of the generated curved trajectory when the needle interacts with soft tissue. Finding a feasible model is important in simulators with applications in training novice clinicians or in path planners used for needle guidance. Using intra-operative force measurements at the needle base, our approach relates mechanical and geometric properties of needle-tissue interaction to the net amount of deflection and estimates the needle curvature. To this end, tissue resistance is modeled by introducing virtual springs along the needle shaft, and the impact of needle-tissue friction is considered by adding a moving distributed external force to the bending equations. Cutting force is also incorporated by finding its equivalent sub-boundary conditions. Subsequently, the closed-form solution of the partial differential equations governing the planar deflection is obtained using Green's functions. To evaluate the performance of our model, experiments were carried out on artificial phantoms.
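In a standard Euler-Bernoulli formulation, the modeling ingredients sketched in the abstract (tissue as virtual springs plus a distributed friction load) lead to an equation of the form (notation ours, not the paper's):

```latex
EI\,\frac{d^{4} w(x)}{dx^{4}} \;+\; K(x)\, w(x) \;=\; q(x)
```

where \(w\) is the lateral deflection, \(EI\) the needle's flexural rigidity, \(K(x)\) the stiffness density of the virtual springs representing tissue resistance, and \(q(x)\) the distributed load such as the moving friction force; the cutting force then enters through the boundary conditions at the needle tip.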
Asenapine for long-term treatment of bipolar disorder: a double-blind 40-week extension study.
BACKGROUND Asenapine is approved in the United States for acute treatment of manic or mixed episodes of bipolar I disorder with or without psychotic features. We report the results of long-term treatment with asenapine in patients with bipolar I disorder. METHODS Patients completing either of two 3-week efficacy trials and a subsequent 9-week double-blind extension were eligible for this 40-week double-blind extension. Patients in the 3-week trials were randomized to flexible-dose asenapine (5 or 10 mg BID), placebo, or olanzapine (5-20 mg QD; included for assay sensitivity only). Patients entering the extension phase maintained their preestablished treatment; those originally randomized to placebo received flexible-dose asenapine (placebo/asenapine). Safety and tolerability endpoints included adverse events (AEs), extrapyramidal symptoms, laboratory values, and anthropometric measures. Efficacy, a secondary assessment, was measured as change in Young Mania Rating Scale (YMRS) total score from 3-week trial baseline to week 52 with asenapine or olanzapine; the placebo/asenapine group was assessed for safety only. RESULTS Incidence of treatment-emergent AEs was 71.9%, 86.1%, and 79.4% with placebo/asenapine, asenapine, and olanzapine, respectively. The most frequent treatment-emergent AEs were headache and somnolence with placebo/asenapine; insomnia, sedation, and depression with asenapine; and weight gain, somnolence, and sedation with olanzapine. Among observed cases, mean ± SD changes in YMRS total score at week 52 were -28.6 ± 8.1 and -28.2 ± 6.8 for asenapine and olanzapine, respectively. LIMITATIONS The study did not have a long-term placebo group. CONCLUSIONS In this 52-week extension in patients with bipolar mania, asenapine was well tolerated and long-term maintenance of efficacy was supported.
Answering English questions by computer: a survey
Fifteen experimental English language question-answering systems which are programmed and operating are described and reviewed. The systems range from a conversation machine to programs which make sentences about pictures and systems which translate from English into logical calculi. Systems are classified as list-structured data-based, graphic data-based, text-based and inferential. Principles and methods of operations are detailed and discussed. It is concluded that the data-base question-answerer has passed from initial research into the early developmental phase. The most difficult and important research questions for the advancement of general-purpose language processors are seen to be concerned with measuring meaning, dealing with ambiguities, translating into formal languages and searching large tree structures.
The acceptance and use of a virtual learning environment in China
The success of a virtual learning environment (VLE) depends to a considerable extent on student acceptance and use of such an e-learning system. After critically assessing models of technology adoption, including the Technology Acceptance Model (TAM), TAM2, and the Unified Theory of Acceptance and Usage of Technology (UTAUT), we build a conceptual model to explain the differences between individual students in the level of acceptance and use of a VLE. This model extends TAM2 and includes subjective norm, personal innovativeness in the domain of information technology, and computer anxiety. Data were collected from 45 Chinese participants in an Executive MBA program. After performing satisfactory reliability and validity checks, the structural model was tested with the use of PLS. Results indicate that perceived usefulness has a direct effect on VLE use. Perceived ease of use and subjective norm have only indirect effects via perceived usefulness. Both personal innovativeness and computer anxiety have direct effects on perceived ease of use only. Implications are that program managers in education should not only concern themselves with basic system design but also explicitly address individual differences between VLE users.
Molecular Organization of Reagents in the Kinetics and Catalysis of Liquid-Phase Reactions: XI. Manifestation of the Structure of Solution in the Kinetics of Water Addition to Isocyanate in Water–Dioxane Mixtures
Evidence for the structural effect of liquids associated by hydrogen bonds on the kinetics of molecular reactions was experimentally found. The kinetics of hydrolysis of (phenylaza)phenyl isocyanate in water–dioxane mixtures was studied at various temperatures and in the presence of structure-making and structure-breaking additives. The apparent order γ of reaction with respect to water concentration increased with temperature because of the partial breaking of the H-bond solution structure. It was found that the value of γ was affected by salt additives, for which positive (Et4NCl) or negative (KI) hydration is typical. This hydration resulted in strengthening or partially breaking the H-bond structure of water, respectively. It follows from the kinetic data that the addition of 0.1 mol/l Et4NCl was equivalent to a decrease in the solution temperature by 6 to 7°C, whereas the addition of 0.1 mol/l KI was equivalent to an increase in the temperature by 5 to 6°C. The effect of poly(ethylene oxide) additives (which stabilize the structure of water) on the value of γ was similar to the effect of the tetraethylammonium salt, which is characterized by positive hydration.
An Eigendecomposition Approach to Weighted Graph Matching Problems
This paper discusses an approximate solution to the weighted graph matching problem (WGMP) for both undirected and directed graphs. The WGMP is the problem of finding the optimum matching between two weighted graphs, which are graphs with weights at each arc. The proposed method employs an analytic, instead of a combinatorial or iterative, approach to the optimum matching problem of such graphs. By using the eigendecompositions of the adjacency matrices (in the case of the undirected graph matching problem) or some Hermitian matrices derived from the adjacency matrices (in the case of the directed graph matching problem), a matching close to the optimum one can be found efficiently when the graphs are sufficiently close to each other. Simulation experiments are also given to evaluate the performance of the proposed method. Index Terms – Eigendecomposition, inexact matching, structural description, structural pattern recognition, weighted graph matching.
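A compact sketch of the undirected case (our own NumPy illustration, simplified from the paper: brute-force assignment stands in for an optimal assignment solver, so it only scales to small graphs): eigendecompose both adjacency matrices and pick the node correspondence that best aligns the absolute values of their eigenvector rows, which act as spectral "signatures" of the nodes.

```python
import itertools
import numpy as np

def match_graphs(A, B):
    """Approximate weighted graph matching via eigendecomposition.
    A, B: symmetric adjacency matrices of equal size.
    Returns sigma with sigma[i] = node of A matched to node i of B."""
    _, Ua = np.linalg.eigh(A)              # eigenvalues ascending for both,
    _, Ub = np.linalg.eigh(B)              # so columns are comparable
    S = np.abs(Ua) @ np.abs(Ub).T          # node-to-node signature similarity
    n = len(A)
    # brute-force assignment for small n (an assignment solver in practice)
    best = max(itertools.permutations(range(n)),
               key=lambda sig: sum(S[sig[i], i] for i in range(n)))
    return list(best)

rng = np.random.default_rng(0)
n = 5
A = rng.random((n, n)); A = (A + A.T) / 2  # random symmetric weights
perm = [2, 0, 4, 1, 3]
P = np.eye(n)[perm]                        # B = P A P^T relabels nodes by perm
B = P @ A @ P.T
print(match_graphs(A, B))
```

With distinct eigenvalues (as random weights almost surely give), the isomorphic copy is recovered exactly; the paper's point is that this remains a good approximation when the two graphs are merely close rather than identical.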
The Human Permanent Tooth Agenesis (He-Zhao Deficiency) Gene is Expressed at Sites of Tooth Formation and Maps to the Locus for KROX-26/ZNF22
A MULTI-PATH SIGNAL PROPAGATION MODEL FOR THE POWERLINE CHANNEL IN THE HIGH FREQUENCY RANGE
For the use of the mains networks as a high speed data path for Internet, voice and data services, carrier frequencies within the range from 500 kHz up to 20 MHz must be considered. The development of suitable communication systems and the planning of power line communication networks require measurement-based models of the transfer characteristics of the mains network in the abovementioned frequency range. The heterogeneous structure of the mains network with numerous branches and impedance mismatching causes numerous reflections. Besides multi-path propagation with frequency-selective fading, typical power cables exhibit signal attenuation increasing with length and frequency. The complex transfer function of a power line link can be described by a parametric model in the considered frequency range. Measurements of amplitude and phase response of a sample network with well-known geometry approve the validity of the model. Comparisons with measurements conducted at "live" mains networks prove the validity of the model also for real network topologies.
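A parametric multipath model of the kind described — N dominant propagation paths, each with a weighting factor, a length-and-frequency-dependent attenuation, and a propagation delay — is commonly written as (a standard form for this channel class; the paper's exact parametrization may differ):

```latex
H(f) \;=\; \sum_{i=1}^{N} g_i \; e^{-(a_0 + a_1 f^{k})\, d_i}\; e^{-j\, 2\pi f\, d_i / v_p}
```

with \(g_i\) the weighting factor of path \(i\), \(d_i\) its length, \(v_p\) the propagation speed on the cable, and \(a_0, a_1, k\) attenuation parameters fitted from measurements. The first exponential captures the attenuation increasing with length and frequency; the second captures the delay spread that produces the frequency-selective fading.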
Digital Dead Time Auto-Tuning for Maximum Efficiency Operation of Isolated DC-DC Converters
The rapidly growing use of digital control in Distributed Power Architectures (DPA) has been primarily driven by the increasing need of sophisticated Power Management functions. However, specifically designed control algorithms can also be exploited for optimizing the system performance. In this paper we present a new auto-tuning method and system that makes it possible to operate an isolated DC-DC converter at maximum efficiency. The auto-tuning performs an optimum adjustment of both primary side dead time and secondary side conduction times based on the measurement of the input current. It does not require any additional external components since current sensing functions are already implemented for power management purposes. Experimental measurements performed on an asymmetrical driven half-bridge DC-DC converter demonstrate the effectiveness of our solution and its robustness to component variations.
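The measurement-driven tuning idea — at fixed load, the dead time that minimizes input current maximizes efficiency — can be sketched as a simple hill-climbing search (a hypothetical illustration of the principle; the paper's actual algorithm, and its handling of secondary-side conduction times, is not reproduced here):

```python
def autotune(measure_input_current, t0=100e-9, step=5e-9, iters=40):
    """Hill-climbing dead-time tuner (illustrative sketch): perturb the
    dead time and keep any change that lowers the measured input current,
    which at fixed output power corresponds to higher efficiency."""
    t, best = t0, measure_input_current(t0)
    direction = 1
    for _ in range(iters):
        cand = t + direction * step
        i = measure_input_current(cand)
        if i < best:
            t, best = cand, i
        else:
            direction = -direction        # reverse the search direction
            step /= 2                     # and refine the step size
    return t

# toy plant model: input current is minimal at a 150 ns dead time
current = lambda td: 1.0 + ((td - 150e-9) * 1e7) ** 2
print(autotune(current))
```

Since the controller already senses input current for power management, such a perturb-and-observe loop needs no extra external components, which is the key practical point of the paper.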
Influence of fake news in Twitter during the 2016 US presidential election
The dynamics and influence of fake news on Twitter during the 2016 US presidential election remains to be clarified. Here, we use a dataset of 171 million tweets in the five months preceding the election day to identify 30 million tweets, from 2.2 million users, which contain a link to news outlets. Based on a classification of news outlets curated by www.opensources.co, we find that 25% of these tweets spread either fake or extremely biased news. We characterize the networks of information flow to find the most influential spreaders of fake and traditional news and use causal modeling to uncover how fake news influenced the presidential election. We find that, while top influencers spreading traditional center and left leaning news largely influence the activity of Clinton supporters, this causality is reversed for the fake news: the activity of Trump supporters influences the dynamics of the top fake news spreaders. The influence of 'fake news’, spread via social media, has been much discussed in the context of the 2016 US presidential election. Here, the authors use data on 30 million tweets to show how content classified as fake news diffused on Twitter before the election.
Interactive Full Image Segmentation
We address the task of interactive full image annotation, where the goal is to produce accurate segmentations for all object and stuff regions in an image. To this end we propose an interactive, scribble-based annotation framework which operates on the whole image to produce segmentations for all regions. This enables the annotator to focus on the largest errors made by the machine across the whole image, and to share corrections across nearby regions. Furthermore, we adapt Mask-RCNN [21] into a fast interactive segmentation framework and introduce a new instance-aware loss measured at the pixel-level in the full image canvas, which lets predictions for nearby regions properly compete. Finally, we compare to interactive single-object segmentation on the COCO panoptic dataset [11, 25, 32]. We demonstrate that, at a budget of four extreme clicks and four corrective scribbles per region, our interactive full image segmentation approach leads to a 5% IoU gain, reaching 90% IoU.
BOYS WILL BE BOYS : GENDER , OVERCONFIDENCE , AND COMMON STOCK INVESTMENT
Theoretical models predict that overconfident investors trade excessively. We test this prediction by partitioning investors on gender. Psychological research demonstrates that, in areas such as finance, men are more overconfident than women. Thus, theory predicts that men will trade more excessively than women. Using account data for over 35,000 households from a large discount brokerage, we analyze the common stock investments of men and women from February 1991 through January 1997. We document that men trade 45 percent more than women. Trading reduces men's net returns by 2.65 percentage points a year as opposed to 1.72 percentage points for women.
Real-time bidirectional path tracing via rasterization
Global illumination drastically improves visual realism of interactive applications. Although many interactive techniques are available, they have some limitations or employ coarse approximations. For example, general instant radiosity often has numerical error, because the sampling strategy fails in some cases. This problem can be reduced by a bidirectional sampling strategy that is often used in off-line rendering. However, it has been complicated to implement in real-time applications. This paper presents a simple real-time global illumination system based on bidirectional path tracing. The proposed system approximates bidirectional path tracing by using rasterization on a commodity DirectX® 11 capable GPU. Moreover, for glossy surfaces, a simple and efficient artifact suppression technique is also introduced.
Nanomechanics and the viscoelastic behavior of carbon nanotube-reinforced polymers
Table of contents (front matter: acknowledgments, list of figures, list of tables). Chapter 1: Introduction. Chapter 2: Background — Structure of Carbon Nanotubes; Methods of Nanotube Fabrication; Mechanical Properties of Carbon Nanotubes (Modulus; Strength); Carbon Nanotube-Reinforced Polymers — issues related to the fabrication of NRPs (nanotube dispersion within the polymer; nanotube orientation; load transfer across the nanotube-polymer interface); Mechanical Properties of Carbon Nanotube-Reinforced Polymers (elastic behavior; viscoelastic behavior; other properties).
A content analysis of depression-related tweets
This study examines depression-related chatter on Twitter to glean insight into social networking about mental health. We assessed themes of a random sample (n=2,000) of depression-related tweets (sent April 11 to May 4, 2014). Tweets were coded for expression of DSM-5 symptoms for Major Depressive Disorder (MDD). Supportive or helpful tweets about depression were the most common theme (n=787, 40%), closely followed by disclosing feelings of depression (n=625; 32%). Two-thirds of tweets revealed one or more symptoms for the diagnosis of MDD and/or communicated thoughts or ideas that were consistent with struggles with depression after accounting for tweets that mentioned depression trivially. Health professionals can use our findings to tailor and target prevention and awareness messages to those Twitter users in need.
Effect of C1-Esterase-inhibitor in angiotensin-converting enzyme inhibitor-induced angioedema.
OBJECTIVES/HYPOTHESIS The study objective was to generate pilot data to evaluate the effectiveness and safety of C1-esterase-inhibitor concentrate (C1-INH) compared to standard treatment in patients with angiotensin-converting enzyme inhibitor (ACEi)-induced angioedema affecting the upper aerodigestive tract. STUDY DESIGN Proof-of-concept case series with historical control. METHODS Adult patients with angioedema in the upper aerodigestive tract presenting to the emergency department were included. After establishing the diagnosis of ACEi-induced angioedema based on patient history and thorough clinical examination, all patients were administered 1,000 international units (IU) of C1-INH intravenously. A historical control group consisting of adult patients with ACEi-induced angioedema who had been treated with intravenous corticosteroids and antihistamines at the same institution over the past 8 years was used for comparison. The most important parameters assessed were the time to complete resolution of symptoms and the need for intubation or tracheotomy. RESULTS Ten patients were included in the C1-INH group and 47 in the corticosteroid/antihistamine group. The time to complete resolution of symptoms was considerably longer in the historical control group (33.1 ± 19.4 hours) than in the C1-INH group (10.1 ± 3.0 hours). No intubation or tracheotomy was needed in the C1-INH group (0/10 patients), whereas three out of the 47 historical controls required tracheotomy and two were intubated (5/47). CONCLUSION The results suggest a role for C1-INH as an effective and safe therapeutic option in patients with ACEi-induced angioedema, which needs to be confirmed by further larger and double-blinded studies. LEVEL OF EVIDENCE 4.
The role of orienting in vibrissal touch sensing
Rodents, such as rats and mice, are strongly tactile animals who explore the environment with their long mobile facial whiskers, or macrovibrissae, and orient to explore objects further with their shorter, more densely packed, microvibrissae. Although whisker motion (whisking) has been extensively studied, less is known about how rodents orient their vibrissal system to investigate unexpected stimuli. We describe two studies that address this question. In the first we seek to characterize how adult rats orient toward unexpected macrovibrissal contacts with objects and examine the microvibrissal exploration behavior following such contacts. We show that rats orient to the nearest macrovibrissal contact on an unexpected object, progressively homing in on the nearest contact point on the object in each subsequent whisk. Following contact, rats "dab" against the object with their microvibrissae at an average rate of approximately 8 Hz, which suggests synchronization of microvibrissal dabbing with macrovibrissal motion, and an amplitude of 5 mm. In study two, we examine the role of orienting to tactile contacts in developing rat pups for maintaining aggregations (huddles). We show that young pups are able to orient to contacts with nearby conspecifics before their eyes open implying an important role for the macrovibrissae, which are present from birth, in maintaining contact with conspecifics. Overall, these data suggest that orienting to tactile cues, detected by the vibrissal system, plays a crucial role throughout the life of a rat.
Posture Recognition Based on Fuzzy Logic for Home Monitoring of the Elderly
We propose in this paper a computer vision-based posture recognition method for home monitoring of the elderly. The proposed system performs human detection prior to the posture analysis; posture recognition is performed only on a human silhouette. The human detection approach has been designed to be robust to different environmental conditions. Posture is therefore analyzed with simple, efficient features designed only to describe the human silhouette, not to manage environment-related constraints. The posture recognition method, based on fuzzy logic, identifies four static postures and is robust to variation in the distance between the camera and the person, and to the person's morphology. With a posture recognition accuracy of 74.29%, this approach can detect emergency situations, such as a fall, within a health smart home.
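The fuzzy-logic classification idea can be sketched as follows. This is an illustrative stand-in, not the paper's actual rule base: the feature (bounding-box aspect ratio of the silhouette), the triangular membership ranges, and the defuzzification by maximum membership are all assumptions.

```python
# Hypothetical sketch of fuzzy posture classification from a silhouette's
# height/width aspect ratio; membership ranges are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_posture(aspect_ratio):
    # Membership degrees for four static postures (invented ranges).
    memberships = {
        "lying":    tri(aspect_ratio, 0.0, 0.3, 0.7),
        "sitting":  tri(aspect_ratio, 0.4, 0.9, 1.4),
        "bending":  tri(aspect_ratio, 1.0, 1.5, 2.0),
        "standing": tri(aspect_ratio, 1.6, 2.5, 4.0),
    }
    # Defuzzify by taking the posture with the highest membership degree.
    return max(memberships, key=memberships.get)
```

Overlapping memberships are what make the fuzzy approach tolerant to camera distance and morphology: a borderline silhouette gets nonzero degrees for two postures and the stronger one wins.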
OPTIMIZATION OF A WAVE CANCELLATION MULTIHULL SHIP USING CFD TOOLS
A simple CFD tool, coupled to a discrete surface representation and a gradient-based optimization procedure, is applied to the design of optimal hull forms and optimal arrangement of hulls for a wave cancellation multihull ship. The CFD tool, which is used to estimate the wave drag, is based on the zeroth-order slender ship approximation. The hull surface is represented by a triangulation, and almost every grid point on the surface can be used as a design variable. A smooth surface is obtained via a simplified pseudo-shell problem. The optimal design process consists of two steps. The optimal center and outer hull forms are determined independently in the first step, where each hull keeps the same displacement as the original design while the wave drag is minimized. The optimal outer-hull arrangement is determined in the second step for the optimal center and outer hull forms obtained in the first step. Results indicate that the new design can achieve a large wave drag reduction in comparison to the original design configuration.
VeriCon: towards verifying controller programs in software-defined networks
Software-defined networking (SDN) is a new paradigm for operating and managing computer networks. SDN enables logically-centralized control over network devices through "controller" software that operates independently of the network hardware, and can be viewed as the network operating system. Network operators can run both in-house and third-party SDN programs (often called applications) on top of the controller, e.g., to specify routing and access control policies. SDN opens up the possibility of applying formal methods to prove the correctness of computer networks. Indeed, recently much effort has been invested in applying finite state model checking to check that SDN programs behave correctly. However, in general, scaling these methods to large networks is challenging and, moreover, they cannot guarantee the absence of errors. We present VeriCon, the first system for verifying that an SDN program is correct on all admissible topologies and for all possible (infinite) sequences of network events. VeriCon either confirms the correctness of the controller program on all admissible network topologies or outputs a concrete counterexample. VeriCon uses first-order logic to specify admissible network topologies and desired network-wide invariants, and then implements classical Floyd-Hoare-Dijkstra deductive verification using Z3. Our preliminary experience indicates that VeriCon is able to rapidly verify correctness, or identify bugs, for a large repertoire of simple core SDN programs. VeriCon is compositional, in the sense that it verifies the correctness of execution of any single network event w.r.t. the specified invariant, and can thus scale to handle large programs. To relieve the programmer of the burden of specifying inductive invariants, VeriCon includes a separate procedure for inferring invariants, which is shown to be effective on simple controller programs.
We view VeriCon as a first step en route to practical mechanisms for verifying network-wide invariants of SDN programs.
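The gap between bounded checking and VeriCon's approach can be illustrated with a deliberately weak toy: brute-forcing an invariant over event orderings for one concrete learning-switch controller. Everything below (the controller, the invariant, the event set) is an invented example; VeriCon instead proves the invariant for all admissible topologies and unbounded event sequences using an SMT solver.

```python
import itertools

def handle_event(state, event):
    """Toy learning-switch controller: remember the ingress port of each source host."""
    host, port = event
    new_state = dict(state)
    new_state[host] = port  # install (or overwrite) the forwarding rule
    return new_state

def invariant(state, history):
    """Network-wide invariant: each installed rule reflects the most
    recent observation for that host."""
    last = {}
    for host, port in history:
        last[host] = port
    return state == last

# Brute-force check over all orderings of a small event set -- a finite
# check only, unlike VeriCon's deductive proof.
events = [("h1", 1), ("h2", 2), ("h1", 3)]
ok = True
for seq in itertools.permutations(events):
    state, hist = {}, []
    for ev in seq:
        state = handle_event(state, ev)
        hist.append(ev)
        if not invariant(state, hist):
            ok = False
```

The point of the contrast: the loop above can only ever exercise the topologies and sequences it enumerates, which is exactly the limitation of finite state model checking that VeriCon is designed to overcome.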
Clutch tectonics and the partial attachment of lithospheric layers
Clutch zones, in analogy to the clutch in an automobile, explain the mechanical communication between the necessarily different displacement fields of rheologically distinct lithospheric layers. In contrast to detachment zones, these sub-horizontal shear zones act as partial attachment zones between lithospheric layers. Two cases are possible: 1) Top-driven clutch systems, such as in extensional core complexes, in which kinematics are controlled by gravitational forces; or 2) Bottom-driven clutch systems in which kinematics are imposed from below. We focus on three different clutches that are associated with the lithosphere: the crustal brittle/ductile transition, the crust/mantle transition (lower crust), and the lithosphere/asthenosphere transition. The clutch model predicts coupled crustal (lithospheric?) deformation resulting from bulk flow of the lithospheric and asthenospheric mantle. An indication that orogenic systems are generally bottom-driven, rather than side-driven, is the pervasive sub-horizontal shearing which occurs within the lowermost crust. This model questions several aspects of classical plate tectonics by suggesting that: 1) Vertical displacement gradients are as significant as horizontal displacement gradients; 2) Areas of coherent upper crustal movement (plates?) do not encompass the entire lithosphere; and 3) Orogenic deformation results from bottom, rather than side, boundary conditions.
A Differentially-Driven Dual-Polarized Magneto-Electric Dipole Antenna
A novel differentially-driven dual-polarized antenna is proposed in this communication. It is a magneto-electric dipole antenna whose gain and beamwidth remain constant with frequency within the operating bandwidth. If the antenna is ideally symmetrical, its differential port-to-port isolation is theoretically infinite. Due to the differentially-driven scheme, its cross-polarization level can be very low. Measurement shows that the proposed antenna achieves a wide impedance bandwidth of 68% (0.95 to 1.92 GHz) for differential reflection coefficients less than -10 dB and high differential port-to-port isolation of better than 36 dB within the bandwidth. The 3-dB-gain bandwidth of the proposed antenna is 62% (1.09 to 2.08 GHz), and the radiation pattern across it is stable and unidirectional. The broadside gain within the 3-dB-gain bandwidth ranges from 6.6 to 9.6 dBi. The cross-polarization level is lower than -23 dB across the 3-dB-gain bandwidth. The proposed antenna is the first differentially-driven dual-polarized magneto-electric dipole antenna. A feeding structure is specially designed to fit the differentially-driven scheme, and also to achieve wide impedance and gain bandwidths.
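As a quick arithmetic check, the quoted percentage bandwidths follow from the band edges via the standard fractional-bandwidth formula (bandwidth relative to the arithmetic center frequency; frequencies in GHz):

```python
def fractional_bandwidth(f_lo, f_hi):
    """Fractional bandwidth BW = 2 * (f_hi - f_lo) / (f_hi + f_lo)."""
    return 2 * (f_hi - f_lo) / (f_hi + f_lo)

impedance_bw = fractional_bandwidth(0.95, 1.92)  # matches the quoted 68%
gain_bw = fractional_bandwidth(1.09, 2.08)       # matches the quoted 62%
```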
Symmetry-breaking Convergence Analysis of Certain Two-layered Neural Networks with Relu Nonlinearity
In this paper, we use a dynamical systems approach to analyze the nonlinear weight dynamics of two-layered bias-free networks of the form g(x; w) = ∑_{j=1}^{K} σ(w_j^T x), where σ(·) is the ReLU nonlinearity. We assume that the input x follows a Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters w* using l2 loss. We first show that when K = 1, the nonlinear dynamics can be written in closed form, and converges to w* with probability at least (1 − ε)/2 if random weight initialization of proper standard deviation (∼ 1/√d) is used, verifying empirical practice [Glorot & Bengio (2010); He et al. (2015); LeCun et al. (2012)]. For networks with many ReLU nodes (K ≥ 2), we apply our closed-form dynamics and prove that when the teacher parameters {w_j^*}_{j=1}^{K} form orthonormal bases, (1) a symmetric weight initialization yields convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to w* without local minima. To our knowledge, this is the first proof showing global convergence in a nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size under l2 loss. Simulations verify our theoretical analysis.
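The K = 1 teacher-student setting can be simulated directly. The sketch below is a numerical illustration, not the paper's closed-form analysis: it runs minibatch gradient descent for a single ReLU student against a fixed teacher w*, and (as an assumption for determinism) starts the student positively aligned with the teacher so the run lands in the convergence basin rather than relying on the probabilistic guarantee.

```python
import math
import random

random.seed(0)
d = 5
w_star = [1.0] + [0.0] * (d - 1)  # fixed teacher parameters

def relu(z):
    return z if z > 0 else 0.0

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Start inside the convergence basin (positively aligned with w*); the paper
# instead analyzes random initialization of standard deviation ~ 1/sqrt(d).
w = [0.4] + [random.gauss(0, 0.1) for _ in range(d - 1)]

eta = 0.05
for _ in range(2000):
    grad = [0.0] * d
    for _ in range(32):  # small minibatch approximating the population gradient
        x = [random.gauss(0, 1) for _ in range(d)]
        if dot(w, x) > 0:  # ReLU gate: gradient is nonzero only when the unit is active
            err = relu(dot(w, x)) - relu(dot(w_star, x))
            for i in range(d):
                grad[i] += err * x[i]
    for i in range(d):
        w[i] -= eta * grad[i] / 32

dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(w, w_star)))
```

With Gaussian inputs the l2 loss has w = w* as a fixed point of these dynamics, so `dist` shrinks toward zero as training proceeds.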
Reproductive outcomes following induced abortion: a national register-based cohort study in Scotland
OBJECTIVE To investigate reproductive outcomes in women following induced abortion (IA). DESIGN Retrospective cohort study. SETTING Hospital admissions between 1981 and 2007 in Scotland. PARTICIPANTS Data were extracted on all women who had an IA, a miscarriage or a live birth from the Scottish Morbidity Records. A total of 120 033, 457 477 and 47 355 women with a documented second pregnancy following an IA, live birth and miscarriage, respectively, were identified. OUTCOMES Obstetric and perinatal outcomes, especially preterm delivery in a second ongoing pregnancy following an IA, were compared with those in primigravidae, as well as those who had a miscarriage or live birth in their first pregnancy. Outcomes after surgical and medical termination as well as after one or more consecutive IAs were compared. RESULTS IA in a first pregnancy increased the risk of spontaneous preterm birth compared with that in primigravidae (adjusted RR (adj. RR) 1.37, 95% CI 1.32 to 1.42) or women with an initial live birth (adj. RR 1.66, 95% CI 1.58 to 1.74) but not in comparison with women with a previous miscarriage (adj. RR 0.85, 95% CI 0.79 to 0.91). Surgical abortion increased the risk of spontaneous preterm birth compared with medical abortion (adj. RR 1.25, 95% CI 1.07 to 1.45). The adjusted RRs (95% CI) for spontaneous preterm delivery following two, three and four consecutive IAs were 0.94 (0.81 to 1.10), 1.06 (0.76 to 1.47) and 0.92 (0.53 to 1.61), respectively. CONCLUSIONS The risk of preterm birth after IA is lower than that after miscarriage but higher than that in a first pregnancy or after a previous live birth. This risk is not increased further in women who undergo two or more consecutive IAs. Surgical abortion appears to be associated with an increased risk of spontaneous preterm birth in comparison with medical termination of pregnancy. Medical termination was not associated with an increased risk of preterm delivery compared to primigravidae.
Efficient and robust pseudonymous authentication in VANET
Effective and robust operations, as well as security and privacy, are critical for the deployment of vehicular ad hoc networks (VANETs). Efficient and easy-to-manage security and privacy-enhancing mechanisms are essential for the wide-spread adoption of the VANET technology. In this paper, we address this problem, in particular how to achieve efficient and robust pseudonym-based authentication. We design mechanisms that reduce the security overhead for safety beaconing, and retain robustness for transportation safety, even in adverse network settings. Moreover, we show how to enhance the availability and usability of privacy-enhancing VANET mechanisms: our proposal enables vehicle on-board units to generate their own pseudonyms, without affecting the system security.
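The idea of an on-board unit generating its own pseudonyms can be sketched with a hash chain. This is an illustration only and not the paper's scheme (which involves certified keys and infrastructure support): here a vehicle derives a sequence of short, unlinkable-looking pseudonym tags from a secret seed.

```python
import hashlib

def pseudonyms(seed, n):
    """Derive n pseudonym tags from a secret seed via an iterated SHA-256 chain.
    Illustrative only: a real scheme must also bind pseudonyms to certified keys."""
    out = []
    h = seed
    for _ in range(n):
        h = hashlib.sha256(h).digest()
        out.append(h[:8].hex())  # short tag used as the pseudonym identifier
    return out

tags = pseudonyms(b"vehicle-secret", 3)
```

Because each tag is a one-way function of the previous chain state, an eavesdropper who sees the tags cannot link them to each other or to the seed, which is the property pseudonym changes are meant to provide.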
Predicting box-office success of motion pictures with neural networks
Predicting box-office receipts of a particular motion picture has intrigued many scholars and industry leaders as a difficult and challenging problem. In this study, the use of neural networks in predicting the financial performance of a movie at the box-office before its theatrical release is explored. In our model, the forecasting problem is converted into a classification problem: rather than forecasting a point estimate of box-office receipts, a movie is classified into one of nine categories based on its box-office receipts, ranging from a 'flop' to a 'blockbuster.' Because our model is designed to predict the expected revenue range of a movie before its theatrical release, it can be used as a powerful decision aid by studios, distributors, and exhibitors. Our prediction results are presented using two performance measures: the average percent success rate of classifying a movie's success exactly, or within one class of its actual performance. Comparison of our neural network to models proposed in the recent literature, as well as to other statistical techniques, using a 10-fold cross-validation methodology shows that the neural networks do a much better job of predicting in this setting.
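The discretization step and the "within one class" measure can be sketched as follows; the nine-way binning matches the abstract, but the revenue cut points below are illustrative assumptions, not the study's actual thresholds.

```python
# Hypothetical revenue bins (upper edges, in millions of dollars) mapping
# box-office receipts to classes 1 ('flop') through 9 ('blockbuster').
BINS = [1, 10, 20, 40, 65, 100, 150, 200]

def revenue_class(revenue_musd):
    for cls, edge in enumerate(BINS, start=1):
        if revenue_musd < edge:
            return cls
    return 9  # above the top edge: 'blockbuster'

def within_one_class(predicted, actual):
    """The paper's second performance measure: a hit within one class."""
    return abs(predicted - actual) <= 1
```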
An Early Pleistocene hominin mandible from Atapuerca-TD6, Spain.
We present a mandible recovered in 2003 from the Aurora Stratum of the TD6 level of the Gran Dolina site (Sierra de Atapuerca, northern Spain). The specimen, catalogued as ATD6-96, adds to the hominin sample recovered from this site in 1994-1996, and assigned to Homo antecessor. ATD6-96 is the left half of a gracile mandible belonging to a probably female adult individual with premolars and molars in place. This mandible shows a primitive structural pattern shared with all African and Asian Homo species. However, it is small and exhibits a remarkable gracility, a trait shared only with the Early and Middle Pleistocene Chinese hominins. Furthermore, none of the mandibular features considered apomorphic in the European Middle and Early Upper Pleistocene hominin lineage are present in ATD6-96. This evidence reinforces the taxonomic identity of H. antecessor and is consistent with the hypothesis of a close relationship between this species and Homo sapiens.
Brief Review of Vibration Based Machine Condition Monitoring
All machines vibrate in the process of channeling energy into the job to be performed. Machines rarely break down without giving some previous warning; the signs of impending failure are generally present long before a machine totally breaks down. When faults begin to develop in the machine, some of the dynamic processes in the machine change as well, thereby influencing the machine's vibration level and its temporal and spectral vibration properties. Such changes can act as an indicator for early detection and identification of developing faults. This paper briefly reviews machine condition monitoring based on vibration data analysis. After a review of the major, well-established and mature approaches, new unsupervised approaches based on novelty detection are also briefly mentioned.
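The "spectral vibration properties" idea can be made concrete with a minimal sketch: a developing fault adds energy in a frequency band where the healthy machine is quiet, so band power is a natural condition indicator. The signals, frequencies, and thresholds here are invented for illustration.

```python
import math

def rms(signal):
    """Overall vibration level of a frame."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def band_power(signal, fs, f_lo, f_hi):
    """Spectral power in [f_lo, f_hi] Hz via a naive DFT (O(n^2), fine for short frames)."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power

# Healthy baseline: a 50 Hz rotation tone; the fault adds a harmonic near 150 Hz.
fs = 1000
t = [i / fs for i in range(256)]
healthy = [math.sin(2 * math.pi * 50 * x) for x in t]
faulty = [math.sin(2 * math.pi * 50 * x) + 0.8 * math.sin(2 * math.pi * 150 * x) for x in t]
```

A novelty-detection scheme, in the spirit of the unsupervised approaches mentioned above, would learn the distribution of such features on healthy data only and flag frames whose band power falls outside it.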
A Miniaturized, Dual-Band, Circularly Polarized Microstrip Antenna for Installation Into Satellite Mobile Phones
A miniaturized, dual-band, circularly polarized microstrip antenna (CPMA) for satellite mobile phones was designed. Two miniaturized radiation elements using folded structures are stacked vertically to achieve dual bands. Each radiation element shows left-handed circular polarization. The total volume of this dual-band, circularly polarized, microstrip antenna is 46 mm (length) x 24 mm (width) x 13 mm (height). The overall antenna is mounted on a phone-shaped ground plane to fit inside satellite mobile phones.
Enhancing citation recommendation with various evidences
With the tremendous number of citations available in digital libraries, how to suggest citations automatically to meet the information needs of researchers has become an important problem. In this paper, we propose a model that treats citation recommendation as a special retrieval task to address this challenge. First, users provide a target paper with some metadata to our system. Second, the system retrieves a relevant set of candidate citations. The candidate citations are then reranked using well-chosen citation evidence, such as publication time preference, self-citation preference, co-citation preference and publication reputation preference. In particular, various measures are introduced to integrate the evidence. We experimented with the proposed model on an established bibliographic corpus, the ACL Anthology Network; the results show that the model is valuable in practice, and that citation recommendation can be significantly improved using the proposed evidence.
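One simple way to integrate such evidence is a weighted linear combination added to the retrieval score. The sketch below assumes this combination scheme for illustration; the feature names, weights, and scores are invented, and the paper's actual integration measures may differ.

```python
def rerank(candidates, weights):
    """Rerank candidate citations by base relevance plus weighted evidence features.
    Each candidate is a dict with a 'relevance' score and optional evidence keys."""
    def score(c):
        return c["relevance"] + sum(weights[k] * c.get(k, 0.0) for k in weights)
    return sorted(candidates, key=score, reverse=True)

# Illustrative evidence weights mirroring the four preferences named above.
weights = {"recency": 0.3, "self_citation": 0.1, "co_citation": 0.4, "venue_reputation": 0.2}
candidates = [
    {"id": "A", "relevance": 0.70, "recency": 0.2, "co_citation": 0.1},
    {"id": "B", "relevance": 0.55, "recency": 0.9, "co_citation": 0.8, "venue_reputation": 0.7},
]
ranked = rerank(candidates, weights)
```

Candidate B overtakes the initially better-retrieved A once the evidence is folded in, which is exactly the effect reranking is meant to have.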
High frequencies of Y-chromosome haplogroup O2b-SRY465 lineages in Korea: a genetic perspective on the peopling of Korea
Koreans are generally considered a Northeast Asian group, thought to be related to Altaic-language-speaking populations. However, recent findings have indicated that the peopling of Korea might have been more complex, involving dual origins from both southern and northern parts of East Asia. To understand the male lineage history of Korea, more data from informative genetic markers from Korea and its surrounding regions are necessary. In this study, 25 Y-chromosome single nucleotide polymorphism markers and 17 Y-chromosome short tandem repeat (Y-STR) loci were genotyped in 1,108 males from several populations in East Asia. In general, we found East Asian populations to be characterized by male haplogroup homogeneity, showing major Y-chromosomal expansions of haplogroup O-M175 lineages. Interestingly, a high frequency (31.4%) of haplogroup O2b-SRY465 (and its sublineage) is characteristic of male Koreans, whereas the haplogroup distribution elsewhere in East Asian populations is patchy. The ages of the haplogroup O2b-SRY465 lineages (~9,900 years) and the pattern of variation within the lineages suggested an ancient origin in a nearby part of northeastern Asia, followed by an expansion in the vicinity of the Korean Peninsula. In addition, the coalescence time (~4,400 years) for the age of haplogroup O2b1-47z, and its Y-STR diversity, suggest that this lineage probably originated in Korea. Further studies with sufficiently large sample sizes to cover the vast East Asian region and using genomewide genotyping should provide further insights. These findings are consistent with linguistic, archaeological and historical evidence, which suggest that the direct ancestors of Koreans were proto-Koreans who inhabited the northeastern region of China and the Korean Peninsula during the Neolithic (8,000-1,000 BC) and Bronze (1,500-400 BC) Ages.
Impact of a workplace stress reduction program on blood pressure and emotional health in hypertensive employees.
OBJECTIVES This study examined the impact of a workplace-based stress management program on blood pressure (BP), emotional health, and workplace-related measures in hypertensive employees of a global information technology company. DESIGN Thirty-eight (38) employees with hypertension were randomly assigned to a treatment group that received the stress-reduction intervention or a waiting control group that received no intervention during the study period. The treatment group participated in a 16-hour program, which included instruction in positive emotion refocusing and emotional restructuring techniques intended to reduce sympathetic nervous system arousal, stress, and negative affect, increase positive affect, and improve performance. Learning and practice of the techniques was enhanced by heart rate variability feedback, which helped participants learn to self-generate physiological coherence, a beneficial physiologic mode associated with increased heart rhythm coherence, physiologic entrainment, parasympathetic activity, and vascular resonance. BP, emotional health, and workplace-related measures were assessed before and 3 months after the program. RESULTS Three months post-intervention, the treatment group exhibited a mean adjusted reduction of 10.6 mm Hg in systolic BP and of 6.3 mm Hg in diastolic BP. The reduction in systolic BP was significant in relation to the control group. The treatment group also demonstrated improvements in emotional health, including significant reductions in stress symptoms, depression, and global psychological distress and significant increases in peacefulness and positive outlook. Reduced systolic BP was correlated with reduced stress symptoms. Furthermore, the trained employees demonstrated significant increases in the work-related scales of workplace satisfaction and value of contribution. 
CONCLUSIONS Results suggest that a brief workplace stress management intervention can produce clinically significant reductions in BP and improve emotional health among hypertensive employees. Implications are that such interventions may produce a healthier and more productive workforce, enhancing performance and reducing losses to the organization resulting from cognitive decline, illness, and premature mortality.
Significant Permission Identification for Machine-Learning-Based Android Malware Detection
The alarming growth rate of malicious apps has become a serious issue that sets back the prosperous mobile ecosystem. A recent report indicates that a new malicious app for Android is introduced every 10 seconds. To combat this serious malware campaign, we need a scalable malware detection approach that can effectively and efficiently identify malware apps. Numerous malware detection tools have been developed, including system-level and network-level approaches. However, scaling the detection for a large bundle of apps remains a challenging task. In this paper, we introduce Significant Permission IDentification (SigPID), a malware detection system based on permission usage analysis to cope with the rapid increase in the number of Android malware. Instead of extracting and analyzing all Android permissions, we develop three levels of pruning by mining the permission data to identify the most significant permissions that can be effective in distinguishing between benign and malicious apps. SigPID then utilizes machine-learning-based classification methods to classify different families of malware and benign apps. Our evaluation finds that only 22 permissions are significant. We then compare the performance of our approach, using only 22 permissions, against a baseline approach that analyzes all permissions. The results indicate that when a support vector machine is used as the classifier, we can achieve over 90% precision, recall, accuracy, and F-measure, which are about the same as those produced by the baseline approach, while incurring analysis times 4–32 times lower than when all permissions are analyzed. Compared against other state-of-the-art approaches, SigPID is more effective, detecting 93.62% of malware in the dataset and 91.4% of unknown/new malware samples.
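The core pruning idea can be sketched by ranking permissions on how differently malware and benign apps request them and keeping the top k. This is only the intuition; SigPID's actual three-level mining is more elaborate, and the scoring function and sample apps below are invented.

```python
def permission_scores(benign_apps, malware_apps):
    """Score each permission by the absolute difference between its request
    rate in malware and in benign apps (an illustrative significance proxy)."""
    perms = {p for app in benign_apps + malware_apps for p in app}
    scores = {}
    for p in perms:
        b_rate = sum(p in app for app in benign_apps) / len(benign_apps)
        m_rate = sum(p in app for app in malware_apps) / len(malware_apps)
        scores[p] = abs(m_rate - b_rate)
    return scores

def top_k(scores, k):
    """Keep only the k most significant permissions for the classifier."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Tiny invented dataset: each app is the set of permissions it requests.
benign = [{"INTERNET"}, {"INTERNET", "CAMERA"}, {"INTERNET"}]
malware = [{"INTERNET", "SEND_SMS"}, {"SEND_SMS", "READ_CONTACTS"}, {"INTERNET", "SEND_SMS"}]
scores = permission_scores(benign, malware)
```

Training the downstream classifier only on the pruned feature set is what yields the reported 4–32x reduction in analysis time at near-baseline accuracy.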
Electronic and Computer-Assisted Refreshable Braille Display Developed for Visually Impaired Individuals
The Braille alphabet is an important tool that enables visually impaired individuals to lead a comfortable life, like those who have normal vision. For this reason, new applications related to the Braille alphabet are being developed. In this study, a new refreshable Braille display was developed to help visually impaired individuals learn the Braille alphabet more easily. By means of this system, any text downloaded onto a computer can be read immediately by a visually impaired individual by feeling it with his/her hands. The device was tested with visually impaired individuals, with the aim of making learning the Braille alphabet easier.
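Such a display needs a text-to-cell translation step: each character maps to a 6-dot Braille cell that the hardware then raises. The sketch below uses standard grade-1 English Braille dot numbering (dots 1-6) for a handful of letters; the full table and the paper's actual firmware interface are not reproduced.

```python
# Partial grade-1 English Braille table: each cell is the tuple of raised
# dot numbers (1-6). Only a few letters are shown for illustration.
BRAILLE = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "l": (1, 2, 3), " ": (),
}

def to_cells(text):
    """Translate text into a sequence of Braille cells, skipping unmapped characters."""
    return [BRAILLE[ch] for ch in text.lower() if ch in BRAILLE]

cells = to_cells("Abel")
```

Each tuple would drive the corresponding pins of one refreshable cell; an empty tuple lowers all pins (a space).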
Elevated suicide levels associated with legalized gambling.
There has been no systematic, large-scale statistical investigation of the link between gambling and suicide, despite the suggestion of such a link from small-scale case studies. This article examines whether gamblers or those associated with them are prone to suicide and whether gaming communities experience atypically high suicide rates. Las Vegas, the premier U.S. gambling setting, displays the highest levels of suicide in the nation, both for residents of Las Vegas and for visitors to that setting. In general, visitors to and residents of major gaming communities experience significantly elevated suicide levels. In Atlantic City, abnormally high suicide levels for visitors and residents appeared only after gambling casinos were opened. The findings do not seem to result merely because gaming settings attract suicidal individuals.
Information-Based Compact Pose SLAM
Pose SLAM is the variant of simultaneous localization and map building (SLAM) in which only the robot trajectory is estimated and landmarks are only used to produce relative constraints between robot poses. To reduce the computational cost of the information filter form of Pose SLAM and, at the same time, to delay inconsistency as much as possible, we introduce an approach that takes into account only highly informative loop-closure links and nonredundant poses. This approach includes constant time procedures to compute the distance between poses, the expected information gain for each potential link, and the exact marginal covariances while moving in open loop, as well as a procedure to recover the state after a loop closure that, in practical situations, scales linearly in terms of both time and memory. Using these procedures, the robot operates most of the time in open loop, and the cost of the loop closure is amortized over long trajectories. This way, the computational bottleneck shifts to data association, which is the search over the set of previously visited poses to determine good candidates for sensor registration. To speed up data association, we introduce a method to search for neighboring poses whose complexity ranges from logarithmic in the usual case to linear in degenerate situations. The method is based on organizing the pose information in a balanced tree whose internal levels are defined using interval arithmetic. The proposed Pose-SLAM approach is validated through simulations, real mapping sessions, and experiments using standard SLAM data sets.
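The data-association step can be sketched minimally: a previously visited pose is a loop-closure candidate if it lies within translational and rotational thresholds of the current pose. This is a linear-scan illustration with invented thresholds; the paper's method additionally uses marginal covariances and an interval-arithmetic tree to make the search logarithmic.

```python
import math

def neighbors(poses, current, d_max=1.0, a_max=math.pi / 4):
    """Return indices of past poses (x, y, theta) within distance d_max and
    angular difference a_max of the current pose. Thresholds are illustrative."""
    x, y, th = current
    out = []
    for i, (px, py, pth) in enumerate(poses):
        d = math.hypot(px - x, py - y)
        da = abs(math.atan2(math.sin(pth - th), math.cos(pth - th)))  # wrap to [0, pi]
        if d <= d_max and da <= a_max:
            out.append(i)
    return out

# Tiny trajectory: the robot loops back near its starting pose.
trajectory = [(0, 0, 0.0), (2, 0, 0.1), (2, 2, 1.6), (0.3, 0.2, 0.05)]
cands = neighbors(trajectory[:3], trajectory[3])
```

Only the poses returned here would be considered for sensor registration, which is how candidate filtering keeps the loop-closure cost bounded.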
Detecting Clickbait in Online Social Media: You Won't Believe How We Did It
In this paper, we propose an approach for the detection of clickbait posts in online social media (OSM). Clickbait posts are short catchy phrases that attract a user's attention and entice them to click through to an article. The approach is based on a machine learning (ML) classifier capable of distinguishing between clickbait and legitimate posts published in OSM. The suggested classifier is based on a variety of features, including image-related features, linguistic analysis, and methods for abuser detection. In order to evaluate our method, we used two datasets provided by Clickbait Challenge 2017. The best performance obtained by the ML classifier was an AUC of 0.8, accuracy of 0.812, precision of 0.819, and recall of 0.966. In addition, as opposed to previous studies, we found that clickbait post titles are statistically significantly shorter than legitimate post titles. Finally, we found that counting the number of formal English words in the given content is useful for clickbait detection.
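The formal-English-words feature can be sketched as the fraction of a title's tokens found in a formal vocabulary. The tiny vocabulary and example titles below are invented stand-ins for a real dictionary and real posts; only the idea of the feature comes from the abstract.

```python
# Stand-in vocabulary of "formal English" words; a real system would use a
# full dictionary or wordlist.
FORMAL_VOCAB = {"the", "government", "announced", "new", "policy", "on", "trade",
                "you", "what", "happened", "next"}

def formal_word_ratio(title):
    """Fraction of a title's tokens that appear in the formal vocabulary."""
    tokens = [t.strip(".,!?'").lower() for t in title.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    return sum(t in FORMAL_VOCAB for t in tokens) / len(tokens)

legit = "The government announced a new policy on trade"
bait = "You won't BELIEVE what happened next!!!"
```

The ratio would be one column in the classifier's feature matrix, alongside the image-related and abuser-detection features.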
Music and emotion: electrophysiological correlates of the processing of pleasant and unpleasant music.
Human emotion and its electrophysiological correlates are still poorly understood. The present study examined whether the valence of perceived emotions would differentially influence EEG power spectra and heart rate (HR). Pleasant and unpleasant emotions were induced by consonant and dissonant music. Unpleasant (compared to pleasant) music evoked a significant decrease of HR, replicating the pattern of HR responses previously described for the processing of emotional pictures, sounds, and films. In the EEG, pleasant (contrasted to unpleasant) music was associated with an increase of frontal midline (Fm) theta power. This effect is taken to reflect emotional processing in close interaction with attentional functions. These findings show that Fm theta is modulated by emotion more strongly than previously believed.
Prediction of intent in robotics and multi-agent systems
Moving beyond the stimulus contained in observable agent behaviour, i.e. understanding the underlying intent of the observed agent is of immense interest in a variety of domains that involve collaborative and competitive scenarios, for example assistive robotics, computer games, robot–human interaction, decision support and intelligent tutoring. This review paper examines approaches for performing action recognition and prediction of intent from a multi-disciplinary perspective, in both single robot and multi-agent scenarios, and analyses the underlying challenges, focusing mainly on generative approaches.
Robot-assisted laparoscopic prostatectomy versus open: comparison of the learning curve of a single surgeon.
BACKGROUND AND PURPOSE Because of the increased use of robot-assisted laparoscopic prostatectomy (RALP) for the management of localized prostate cancer, surgeons in training face the issues of developing skills in both open surgery and the robotic console. This study compares prospectively the safety and efficacy of the first 50 open radical retropubic prostatectomy (RRP) procedures and the first 50 RALP procedures, performed by the same surgeon in the same institution. PATIENTS AND METHODS The patients' baseline demographic, clinical, and oncologic parameters were prospectively recorded. The study end points included oncologic outcome, functional outcomes (at 3 months), and perioperative parameters. Complications were classified according to the modified Clavien system. RESULTS No statistically significant differences were noted between the two groups in terms of preoperative patient characteristics and oncologic parameters. The operative time and mean estimated blood loss were lower in the RALP group (P<0.001), but no statistically significant difference was noted in regard to transfusion rates (P=0.362). Mean hospital stay was lower in the RALP group (P<0.001). The minor (Clavien I+II) and major (Clavien III+IV) complication rates were comparable between the two groups. The overall positive margin (PSM) rates were 20% and 18% for RRP and RALP, respectively (P=0.799), while for pT(3) disease, the PSM rates were 26.1% and 22.2% for RRP and RALP, respectively (P=0.53). The 3-month continence rates were 88% and 90% for RRP and RALP, respectively (P=0.749). For preoperatively potent patients, 3-month potency rates were comparable between the two groups (60.6% and 62.1% in the RRP and the RALP group, respectively, P=0.893). CONCLUSION The early learning curve for RALP appears safe and results in equivalent functional and oncologic outcome, when compared with the results of open surgery.
A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play
The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.
The SPEED Cipher
SPEED is a private key block cipher. It supports three variable parameters: (1) data length — the length of a plaintext/ciphertext of SPEED can be 64, 128 or 256 bits; (2) key length — the length of an encryption/decryption key of SPEED can be any integer between 48 and 256 (inclusive) and divisible by 16; (3) rounds — the number of rounds involved in encryption/decryption can be any integer divisible by 4 but not smaller than 32. SPEED is compact, as indicated by the fact that the object code of a straightforward implementation of SPEED in the programming language C occupies less than 3 kilobytes. It makes full use of current, and more importantly, emerging CPU architectures which host a large number of high-speed hardware registers directly available to application programs. Another important feature of SPEED is that it is built on recent research results on highly nonlinear cryptographic functions, as well as other counter-measures against differential and linear cryptanalytic attacks. It is hoped that the compactness, high throughput and adjustable parameters offered by SPEED, together with the fact that the cipher is in the public domain, would make it an attractive alternative cipher for security applications including electronic financial transactions.

1 Design Philosophy

The aim of this paper is to introduce a private key cipher that is suitable for software implementation and takes maximum advantage of emerging computer architectures that host an increasing number of fast internal hardware registers directly available to application programs. The cipher is called SPEED, which stands for a Secure Package for Encrypting Electronic Data. The cryptographic strength of SPEED is built on recent research results on constructing highly nonlinear Boolean functions [15, 16]. Operational efficiency is an important factor that has been taken into account in the design process.
Another design goal is to make the cipher applicable to fast one-way hashing and to efficient generation of cryptographically strong pseudo-random numbers. Encryption and pseudo-random number generation have direct applications in providing data confidentiality, whereas one-way hashing is essential for efficient authentication and digital signatures. While most smart cards use 8-bit CPUs, workstations and personal computers are mainly based on 32-bit CPUs, which support fast processing of 8, 16 and 32-bit data. Similarly, emerging 64-bit CPUs support efficient handling of 8, 16, 32 and 64-bit data. This led to our decision that the basic data unit for the encryption/decryption operation of SPEED be an 8-bit, 16-bit or 32-bit word. As a plaintext/ciphertext of SPEED consists of 8 words, choosing an 8-bit word as the basic data unit results in a block cipher on 64-bit data, a 16-bit word in a block cipher on 128-bit data, and a 32-bit word in a block cipher on 256-bit data. The process of SPEED is composed of 4 passes, each involving 8 or more consecutive rounds. Thus, similarly to RC5 [14], SPEED supports three variable parameters, namely data length, key length and the number of rounds. Related ideas on variable parameters were previously used in a one-way hashing algorithm called HAVAL [18]. A bit-wise nonlinear Boolean operation is employed in each round. To strengthen the cipher against the differential attack proposed by Biham and Shamir [1], a data-dependent cyclic shift is applied to the output of the operation. This technique was inspired by RC5. The use of a maximally nonlinear Boolean function in a bit-wise Boolean operation helps thwart the linear attack discovered by Matsui [9].
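The RC5-style data-dependent cyclic shift mentioned above can be sketched as follows. This is an illustrative example only, assuming a 16-bit word size; it shows the general technique, not SPEED's actual round function or its exact rule for deriving the rotation amount.

```python
WORD_BITS = 16              # one of 8, 16 or 32 in SPEED; 16 chosen for illustration
MASK = (1 << WORD_BITS) - 1

def rotl(x: int, n: int) -> int:
    """Cyclic left shift (rotation) of a WORD_BITS-wide word."""
    n %= WORD_BITS
    return ((x << n) | (x >> (WORD_BITS - n))) & MASK

def data_dependent_rotate(v: int, amount_source: int) -> int:
    """Rotate v by an amount taken from the data itself (RC5-style):
    the low bits of another data word select the rotation distance."""
    return rotl(v, amount_source & (WORD_BITS - 1))
```

Because the rotation distance depends on intermediate data rather than being fixed, input differences propagate to unpredictable bit positions, which complicates differential cryptanalysis.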
The remainder of this paper is organized as follows: Section 2 details the specification of SPEED; Section 3 provides background information on the round transform used in SPEED; and Section 4 discusses the construction and properties of the five nonlinear Boolean functions used in SPEED. A preliminary analysis of the strength of the cipher against cryptanalysis is reported in Section 5, while a comparison of SPEED with other ciphers in terms of throughput (the number of bits encrypted/decrypted per unit of time) is provided in Section 6. Finally, applications of SPEED in one-way hashing and pseudo-random number generation are suggested in Sections 7 and 8.

2 Description of SPEED

First we introduce a few terms used in this paper. As is common practice, a byte is composed of 8 bits. As mentioned earlier, by a word we mean a string of 8, 16 or 32 bits. All bits in a byte or a word are indexed, starting with 0, from the right-hand side to the left. It is convenient to call right-hand-side bits lower bits and left-hand-side bits upper bits. Three types of operations are applied to data: the first is bit-wise Boolean operations, the second is cyclic shifts (i.e., rotations) to the right or left, and the third is modular additions. In the following discussion we use w to denote the length of (i.e., the number of bits in) a plaintext/ciphertext, ` the length of a key, and r the number of rounds. w can be chosen to be 64, 128 or 256; ` is an integer between 48 and 256 (inclusive) and divisible by 16; and r is an integer larger than or equal to 32 and divisible by 4. SPEED with parameters w, ` and r may be denoted by (w, `, r)-SPEED, or simply by w-bit SPEED when the key length and the number of rounds are not of concern. Table 1 suggests various combinations of the parameters w, ` and r that would provide adequate security. It is recommended that SPEED with fewer than 40 rounds be used only for one-way hashing.
Table 1. SPEED parameters for adequate security (r < 48 may be chosen only when SPEED is used for one-way hashing):

    plain/ciphertext length w (bits):          64     128    256
    key length ` (bits)
      (` = 48, 64, ..., 256, divisible by 16): ≥ 64   ≥ 64   ≥ 64
    number of rounds r
      (r = 32, 36, 40, ..., divisible by 4):   ≥ 64   ≥ 48   ≥ 48

2.1 Encryption

Given a key K of ` bits, SPEED scrambles a plaintext M of w bits into a ciphertext C of the same length.

Flow of Data. The flow of data in SPEED is depicted in Figure 1. A cryptographic key K, which is a string of ` bits, is first expanded by the key scheduling function into four sub-keys K1, K2, K3 and K4. Each Ki, i = 1, 2, 3, 4, consists of r/4 words, or round keys, where r/4 is the number of rounds in each pass. A plaintext M is internally represented as 8 words, each of w/8 bits. These 8 words are processed by P1, P2, P3 and P4 consecutively. Each Pi, i = 1, 2, 3, 4, is called a pass and involves the sub-key Ki. The output C of P4 is the ciphertext of the original plaintext M.

Four Internal Passes. As can be seen from Figure 2, the four internal passes Pi, i = 1, 2, 3, 4, all operate in a similar fashion, although each pass employs a different sub-key as well as a different nonlinear function for its bit-wise Boolean operations. The four nonlinear bit-wise operations are shown in Table 2 in the form of a logic "sum (XOR) of products (AND)".

1. See also a recent report by Blaze et al. [2], which suggests that the key length for a private-key cipher should be at least 75 bits to provide adequate security for critical commercial applications.
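The four-pass data flow described above can be sketched as a skeleton. This is a structural sketch only: `round_fn` stands in for SPEED's actual round transform (which we do not reproduce here), and the sub-keys are taken as already expanded by the key schedule; only the 8-word state and the P1..P4 pass structure follow the paper.

```python
from typing import Callable, List, Sequence

RoundFn = Callable[[List[int], int, int], List[int]]

def speed_like_encrypt(words: List[int],
                       subkeys: Sequence[Sequence[int]],
                       round_fn: RoundFn) -> List[int]:
    """Skeleton of SPEED's data flow: 8 data words are processed by four
    passes P1..P4, each pass consuming its own sub-key Ki of r/4 round keys.
    round_fn(state, round_key, pass_index) is a placeholder for the real
    round transform, which also selects a pass-specific nonlinear function.
    """
    assert len(words) == 8, "a block is internally represented as 8 words"
    assert len(subkeys) == 4, "the key schedule yields four sub-keys"
    state = list(words)
    for pass_idx, ki in enumerate(subkeys, start=1):
        for round_key in ki:          # r/4 rounds per pass
            state = round_fn(state, round_key, pass_idx)
    return state                      # output of P4 is the ciphertext
```

With a toy `round_fn` that simply XORs the round key into every word, one round key per pass, an all-zero block becomes the XOR of the four round keys in each word, which makes the pass ordering easy to trace.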
HFST - Framework for Compiling and Applying Morphologies
HFST–Helsinki Finite-State Technology (hfst.sf.net) is a framework for compiling and applying linguistic descriptions with finite-state methods. HFST currently connects some of the most important finite-state tools for creating morphologies and spellers into one open-source platform and supports extending and improving the descriptions with weights to accommodate the modeling of statistical information. HFST offers a path from language descriptions to efficient language applications in key environments and operating systems. HFST also provides an opportunity to exchange transducers between different software providers in order to get the best out of each finite-state library.
A Review of Wearable Technologies for Elderly Care that Can Accurately Track Indoor Position, Recognize Physical Activities and Monitor Vital Signs in Real Time
Rapid growth of the aged population has caused an immense increase in the demand for healthcare services. Generally, the elderly are more prone to health problems than other age groups. With effective monitoring and alarm systems, the adverse effects of unpredictable events such as sudden illnesses and falls can be ameliorated to some extent. Recently, advances in wearable and sensor technologies have improved the prospects of these service systems for assisting elderly people. In this article, we review state-of-the-art wearable technologies that can be used for elderly care. These technologies fall into three categories: indoor positioning, activity recognition and real-time vital sign monitoring. Positioning is the process of accurate localization and is particularly important for elderly people so that they can be found in a timely manner. Activity recognition not only helps ensure that sudden events (e.g., falls) raise alarms but also serves as a feasible way to guide people's activities so that they avoid dangerous behaviors. Since most elderly people suffer from age-related problems, some vital signs that can be monitored comfortably and continuously via existing techniques are also summarized. Finally, we discuss a series of considerations and future trends regarding the construction of a "smart clothing" system.