Multilevel Voltage-Source-Converter Topologies for Industrial Medium-Voltage Drives
This paper presents a technology review of voltage-source-converter topologies for industrial medium-voltage drives. In this highly active area, different converter topologies and circuits have found their application in the market. This paper covers the high-power voltage-source inverter and the most used multilevel-inverter topologies, including the neutral-point-clamped, cascaded H-bridge, and flying-capacitor converters. This paper presents the operating principle of each topology and a review of the most relevant modulation methods, focused mainly on those used by industry. In addition, the latest advances and future trends of the technology are discussed. It is concluded that the topology and modulation-method selection are closely related to each particular application, leaving a space on the market for all the different solutions, depending on their unique features and limitations like power or voltage level, dynamic performance, reliability, costs, and other technical specifications.
Additional Multi-Touch Attribution for Online Advertising
Multi-touch attribution studies the effects of various types of online advertisements on purchase conversions. It is an important problem in computational advertising, as it allows marketers to assign credit for conversions to different advertising channels and to optimize advertising campaigns. In this paper, we propose an additional multi-touch attribution model (AMTA) based on two obvious assumptions: (1) the effect of an ad exposure fades with time and (2) the effects of ad exposures along a user's browsing path are additive. AMTA borrows techniques from survival analysis and uses the hazard rate to measure the influence of an ad exposure. In addition, we take both the conversion time and the intrinsic conversion rate of users into consideration to generate the probability of a conversion. Experimental results on a large real-world advertising dataset illustrate that our proposed method is superior to state-of-the-art techniques in conversion rate prediction and that the credit allocation based on AMTA is reasonable.
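To make the additive-hazard idea concrete, the sketch below (an illustration under the abstract's two assumptions, not the authors' exact formulation) lets each ad exposure contribute a hazard that decays exponentially after the exposure time, sums the resulting cumulative hazards together with a baseline rate, converts the total into a conversion probability via the survival function, and assigns each channel credit in proportion to its share of the cumulative hazard. The channel-strength parameters beta, the decay rate omega, and the base_rate term are illustrative names, not quantities from the paper.

import math
from collections import defaultdict

def conversion_prob_and_credits(exposures, T, beta, omega, base_rate=0.0):
    """exposures: list of (time, channel); T: end of the observation window.
    Hazard of exposure (t_i, c_i) at time t >= t_i: beta[c_i] * exp(-omega * (t - t_i)).
    Its cumulative hazard up to T: beta[c_i] / omega * (1 - exp(-omega * (T - t_i)))."""
    cum = defaultdict(float)
    for t_i, ch in exposures:
        if t_i <= T:
            cum[ch] += beta[ch] / omega * (1.0 - math.exp(-omega * (T - t_i)))
    total = base_rate + sum(cum.values())
    p_conv = 1.0 - math.exp(-total)      # survival-analysis link: P = 1 - exp(-cumulative hazard)
    # per-channel credit = share of the total cumulative hazard (baseline keeps the remainder)
    credits = {ch: v / total for ch, v in cum.items()} if total > 0 else {}
    return p_conv, credits

# toy usage with made-up channels and parameters
p, cr = conversion_prob_and_credits(
    exposures=[(0.0, "search"), (1.5, "display"), (2.0, "search")],
    T=3.0, beta={"search": 0.30, "display": 0.10}, omega=0.5)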
LHCP and RHCP Substrate Integrated Waveguide Antenna Arrays for Millimeter-Wave Applications
Left-hand and right-hand circularly polarized (LHCP and RHCP) substrate integrated waveguide (SIW) antenna arrays are presented at the 28-GHz band for millimeter-wave (mm-wave) applications. Two types of circularly polarized (CP) antenna elements are designed to achieve LHCP and RHCP performance, respectively. Eight-element LHCP and RHCP antenna arrays have been implemented with feeding networks and measured. Based on the measurements, the LHCP and RHCP antenna arrays have impedance bandwidths of 1.54 and 1.7 GHz for $|S_{11}| < -10\ \text{dB}$, whereas their 3-dB axial-ratio bandwidths are 1.1 and 1.3 GHz, respectively. The fabricated LHCP and RHCP antenna arrays achieve gains of up to 13.09 and 13.52 dBi, respectively. Most of the measured results agree with the simulated ones. The proposed CP antenna arrays can provide low-cost, broadband, high-gain radiation performance with CP properties for mm-wave applications.
Effective Background Model-Based RGB-D Dense Visual Odometry in a Dynamic Environment
This paper proposes a robust background model-based dense-visual-odometry (BaMVO) algorithm that uses an RGB-D sensor in a dynamic environment. The proposed algorithm estimates a nonparametric background model from depth scenes and then estimates the ego-motion of the sensor using an energy-based dense-visual-odometry approach built on the estimated background model, so that moving objects are taken into account. Experimental results demonstrate that the ego-motion is robustly obtained by BaMVO in a dynamic environment.
Physical functioning, pain and quality of life after amputation for musculoskeletal tumours: a national survey.
Patients who have limb amputation for musculoskeletal tumours are a rare group of cancer survivors. This was a prospective cross-sectional survey of patients from five specialist centres for sarcoma surgery in England. Physical function, pain and quality of life (QOL) outcomes were collected after lower extremity amputation for bone or soft-tissue tumours to evaluate the survivorship experience and inform service provision. Of 250 patients, 105 (42%) responded between September 2012 and June 2013. From these, completed questionnaires were received from 100 patients with a mean age of 53.6 years (19 to 91). In total 60 (62%) were male and 37 (38%) were female (three not specified). The diagnosis was primary bone sarcoma in 63 and soft-tissue tumour in 37. A total of 20 tumours were located in the hip or pelvis, 31 above the knee, 32 between the knee and ankle and 17 in the ankle or foot. In total 22 had hemipelvectomy, nine hip disarticulation, 35 transfemoral amputation, one knee disarticulation, 30 transtibial amputation, two toe amputations and one rotationplasty. The Toronto Extremity Salvage Score (TESS) differed by amputation level, with poorer scores at higher levels (p < 0.001). Many reported significant pain. In addition, TESS was negatively associated with increasing age and with pain interference scores. QOL for Cancer Survivors was significantly correlated with TESS (p < 0.001). This relationship appeared to be driven by pain interference scores. This unprecedented national survey confirms that amputation level is linked to physical function, but not to QOL or pain measures. Pain and physical function significantly impact QOL. These results are helpful in managing the expectations of patients about treatment and addressing their complex needs.
Programming algorithms for multilevel phase-change memory
Phase-change memory (PCM) has emerged as one among the most promising technologies for next-generation nonvolatile solid-state memory. Multilevel storage, namely storage of non-binary information in a memory cell, is a key factor for reducing the total cost-per-bit and thus increasing the competitiveness of PCM technology in the nonvolatile memory market. In this paper, we present a family of advanced programming schemes for multilevel storage in PCM. The proposed schemes are based on iterative write-and-verify algorithms that exploit the unique programming characteristics of PCM in order to achieve significant improvements in resistance-level packing density, robustness to cell variability, programming latency, energy-per-bit and cell storage capacity. Experimental results from PCM test-arrays are presented to validate the proposed programming schemes. In addition, the reliability issues of multilevel PCM in terms of resistance drift and read noise are discussed.
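The following is a minimal sketch of a generic iterative program-and-verify loop of the kind the paper builds on; the device hooks cell_program and cell_read, the proportional correction rule, and all constants are hypothetical placeholders rather than the paper's actual schemes.

def write_and_verify(cell_program, cell_read, target_r, tol, max_iters=20,
                     amp=0.5, step=0.05):
    """Generic iterative program-and-verify loop for one multilevel PCM cell.
    cell_program(amplitude): applies a programming pulse (hypothetical device hook).
    cell_read(): returns the read-back cell resistance (hypothetical device hook).
    The pulse amplitude is adjusted until the resistance falls within
    [target_r * (1 - tol), target_r * (1 + tol)] or the iteration budget is spent."""
    r = None
    for _ in range(max_iters):
        cell_program(amp)
        r = cell_read()
        if abs(r - target_r) <= tol * target_r:
            return True, r                       # level verified
        # crude correction: assume higher amplitude drives the cell to higher resistance
        amp += step if r < target_r else -step
        amp = min(max(amp, 0.0), 1.0)
    return False, r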
New Parsimonious Multivariate Spatial Model: Spatial Envelope
Dimension reduction provides a useful tool for analyzing high-dimensional data. The recently developed envelope method is a parsimonious version of the classical multivariate regression model that identifies a minimal reducing subspace of the responses. However, existing envelope methods assume an independent error structure in the model. While the assumption of independence is convenient, it does not address the additional complications associated with spatial or temporal correlations in the data. In this article, we introduce a Spatial Envelope method for dimension reduction in the presence of dependencies across space. We study the asymptotic properties of the proposed estimators and show that the asymptotic variance of the estimated regression coefficients under the spatial envelope model is smaller than that from traditional maximum likelihood estimation. Furthermore, we present a computationally efficient approach for inference. The efficacy of the new approach is investigated through simulation studies and an analysis of an Air Quality Standard (AQS) dataset from the Environmental Protection Agency (EPA). Keywords: dimension reduction, Grassmannian manifold, Matérn covariance function, spatial dependency.
Discriminative probabilistic models for passage based retrieval
The approach of using passage-level evidence for document retrieval has shown mixed results when applied to a variety of test beds with different characteristics. One main reason for the inconsistent performance is that no unified framework exists to model the evidence of individual passages within a document. This paper proposes two probabilistic models to formally model the evidence of a set of top-ranked passages in a document. The first probabilistic model follows the retrieval criterion that a document is relevant if any passage in the document is relevant, and models each passage independently. The second probabilistic model goes a step further and incorporates the similarity correlations among the passages. Both models are trained in a discriminative manner. Furthermore, we present a combination approach to combine the ranked lists of document retrieval and passage-based retrieval. An extensive set of experiments has been conducted on four different TREC test beds to show the effectiveness of the proposed discriminative probabilistic models for passage-based retrieval. The proposed algorithms are compared with a state-of-the-art document retrieval algorithm and a language-model approach for passage-based retrieval. Furthermore, our combined approach is shown to provide better results than both the document retrieval and passage-based retrieval approaches.
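The first model's retrieval criterion, that a document is relevant if any of its top-ranked passages is relevant with passages treated independently, reduces to a noisy-OR combination of per-passage probabilities. A minimal sketch of that combination step (not the paper's trained discriminative model) is:

def doc_relevance_from_passages(passage_probs):
    """Noisy-OR combination: the document is relevant if any of its top-ranked
    passages is relevant, with passages treated as independent.
    passage_probs: iterable of per-passage relevance probabilities in [0, 1]."""
    p_none_relevant = 1.0
    for p in passage_probs:
        p_none_relevant *= (1.0 - p)
    return 1.0 - p_none_relevant

# e.g. three passages with probabilities 0.4, 0.2, 0.1 -> 1 - 0.6*0.8*0.9 = 0.568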
Transducers and the Decidability of Independence in Free Monoids
A transducer-based general method is derived by which proofs of the decidability of independence of regular languages with respect to a given relation can be obtained. This permits a uniform proof of the decidability of certain code properties which so far have only been obtained using separate, quite different, and ad hoc proofs.
Increasing fruit and vegetable intake by changing environments, policy and pricing: restaurant-based research, strategies, and recommendations.
BACKGROUND Restaurants are among the most important and promising venues for environmental, policy, and pricing initiatives to increase fruit and vegetable (F&V) intake. This article reviews restaurant-based environmental, policy and pricing strategies for increasing intake of fruits and vegetables and identifies promising strategies, research needs, and innovative opportunities for the future. METHODS The strategies, examples, and research reported here were identified through an extensive search of published journal articles, government documents, the internet, and inquiries to leaders in the field. Recommendations were expanded based on discussion by participants in the CDC/ACS-sponsored Fruit and Vegetable, Environment Policy and Pricing Workshop held in September of 2002. RESULTS Six separate types of restaurant-based interventions were identified: increased availability, increased access, reduced prices and coupons, catering policies, point-of-purchase (POP) information, and promotion and communication. Combination approaches have also been implemented. Evaluation data on these interventions show some significant impact on healthful diets, particularly with point-of-purchase information. However, most published reports emphasize low-fat eating, and there is a need to translate and evaluate interventions focused on increasing fruit and vegetable intake. CONCLUSIONS Several models for changing environments, policy and pricing to increase fruit and vegetable availability, access, attractiveness and consumption in restaurants have been tested and found to have some promise. There is a need to evaluate fruit and vegetable-specific strategies; to obtain data from industry; to disseminate promising programs; and to enhance public-private partnerships and collaboration to expand on current knowledge.
Bifurcations of Recurrent Neural Networks in Gradient Descent Learning
Asymptotic behavior of a recurrent neural network changes qualitatively at certain points in the parameter space, which are known as "bifurcation points". At bifurcation points, the output of a network can change discontinuously with the change of parameters, and therefore convergence of gradient descent algorithms is not guaranteed. Furthermore, learning equations used for error gradient estimation can be unstable. However, some kinds of bifurcations are inevitable in training a recurrent network as an automaton or an oscillator. Some of the factors underlying successful training of recurrent networks are investigated, such as choice of initial connections, choice of input patterns, teacher forcing, and truncated learning equations.
Smartphone addiction and its relationship with social anxiety and loneliness
Individuals with psychosocial problems such as social phobia or feelings of loneliness might be vulnerable to excessive use of cyber-technological devices, such as smartphones. We aimed to determine the relationship of smartphone addiction with social phobia and loneliness in a sample of university students in Istanbul, Turkey. Three hundred and sixty-seven students who owned smartphones were given the Smartphone Addiction Scale (SAS), UCLA Loneliness Scale (UCLA-LS), and Brief Social Phobia Scale (BSPS). A significant difference in mean SAS scores (p < .001) was found between users whose declared main purpose for smartphone use was accessing social networking sites and the remaining users. The BSPS scores showed positive correlations with all six SAS subscales and with the total SAS scores. The total UCLA-LS scores were positively correlated with the daily life disturbance, positive anticipation, and cyber-oriented relationship subscales and with the total scores on the SAS. In regression analyses, total BSPS scores were significant predictors of SAS total scores (β = 0.313, t = 5.992, p < .001). In addition, BSPS scores were significant predictors of all six SAS subscales, whereas UCLA-LS scores were significant predictors only of cyber-oriented relationship subscale scores on the SAS (β = 0.130, t = 2.416, p < .05). The results of this study indicate that social phobia was associated with the risk of smartphone addiction in young people. Younger individuals who primarily use their smartphones to access social networking sites also show an excessive pattern of smartphone use.
Algorithms for planar graphs
NOYB: privacy in online social networks
Increasingly, Internet users trade privacy for service. Facebook, Google, and others mine personal information to target advertising. This paper presents a preliminary and partial answer to the general question "Can users retain their privacy while still benefiting from these web services?". We propose NOYB, a novel approach that provides privacy while preserving some of the functionality provided by online services. We apply our approach to the Facebook online social networking website. Through a proof-of-concept implementation we demonstrate that NOYB is practical and incrementally deployable, requires no changes to or cooperation from an existing online service, and indeed can be non-trivial for the online service to detect.
The preconception diet is associated with the chance of ongoing pregnancy in women undergoing IVF/ICSI treatment.
BACKGROUND Subfertility and poor nutrition are increasing problems in Western countries. Moreover, nutrition affects fertility in both women and men. In this study, we investigate the association between adherence to general dietary recommendations in couples undergoing IVF/ICSI treatment and the chance of ongoing pregnancy. METHODS Between October 2007 and October 2010, couples planning pregnancy visiting the outpatient clinic of the Department of Obstetrics and Gynaecology of the Erasmus Medical Centre in Rotterdam, the Netherlands were offered preconception counselling. Self-administered questionnaires on general characteristics and diet were completed and checked during the visit. Six questions, based on dietary recommendations of the Netherlands Nutrition Centre, covered the intake of six main food groups (fruits, vegetables, meat, fish, whole wheat products and fats). Using the questionnaire results, we calculated the Preconception Dietary Risk score (PDR), providing an estimate of nutritional habits. Dietary quality increases with an increasing PDR score. We define ongoing pregnancy as an intrauterine pregnancy with positive heart action confirmed by ultrasound. For this analysis we selected all couples (n=199) who underwent a first IVF/ICSI treatment within 6 months after preconception counselling. We applied adjusted logistic regression analysis on the outcomes of interest using SPSS. RESULTS After adjustment for age of the woman, smoking of the woman, PDR of the partner, BMI of the couple and treatment indication we show an association between the PDR of the woman and the chance of ongoing pregnancy after IVF/ICSI treatment (odds ratio 1.65, confidence interval: 1.08-2.52; P=0.02). Thus, a one-point increase in the PDR score associates with a 65% increased chance of ongoing pregnancy. CONCLUSIONS Our results show that increasing adherence to Dutch dietary recommendations in women undergoing IVF/ICSI treatment increases the chance of ongoing pregnancy. These data warrant further confirmation in couples achieving a spontaneous pregnancy and in randomized controlled trials.
A Supervised Approach to Extractive Summarisation of Scientific Papers
Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist, and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author-provided summaries and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods.
Multimodal integration after unilateral labyrinthine lesion: single vestibular nuclei neuron responses and implications for postural compensation.
Plasticity in neuronal responses is necessary for compensation following brain lesions, adaptation to new conditions, and motor learning. In a previous study, we showed that compensatory changes in the vestibuloocular reflex (VOR) following unilateral vestibular loss were characterized by dynamic reweighting of inputs from vestibular and extravestibular modalities at the level of the single neurons that constitute the first central stage of VOR signal processing. Here, we studied another class of neurons, i.e., the vestibular-only neurons, in the vestibular nuclei that mediate vestibulospinal reflexes and provide information to higher brain areas. We investigated changes in the relative contributions of vestibular, neck proprioceptive, and efference copy signals to the responses of these neurons during compensation after contralateral vestibular loss in Macaca mulatta monkeys. We show that the time course of recovery of the vestibular sensitivity of these neurons corresponds with that of lower extremity muscle and tendon reflexes reported in previous studies. More importantly, we found that information from neck proprioceptors, which did not influence neuronal responses before the lesion, was unmasked after the lesion. Such inputs influenced the early stages of the compensation process, evidenced by faster and more substantial recovery of the resting discharge in proprioceptive-sensitive neurons. Interestingly, unlike our previous study of VOR interneurons, the improvement in the sensitivity of the two groups of neurons did not show any difference in the early or late stages after the lesion. Finally, neuronal responses during active head movements were not different before and after the lesion and were attenuated relative to passive movements over the course of recovery, similar to that observed in control conditions. Comparison of the compensatory changes observed in the vestibuloocular and vestibulospinal pathways provides evidence for similarities and differences between the two classes of neurons that mediate these pathways at the functional and cellular levels.
Super-resolution of compressed videos using convolutional neural networks
Convolutional neural networks (CNN) have been successfully applied to image super-resolution (SR) as well as other image restoration tasks. In this paper, we consider the problem of compressed video super-resolution. Traditional SR algorithms for compressed videos rely on information from the encoder such as frame type or quantizer step, whereas our algorithm only requires the compressed low-resolution frames to reconstruct the high-resolution video. We propose a CNN that is trained on both the spatial and the temporal dimensions of compressed videos to enhance their spatial resolution. Consecutive frames are motion compensated and used as input to a CNN that provides super-resolved video frames as output. Our network is pretrained with images, which significantly improves the performance over random initialization. In extensive experimental evaluations, we trained state-of-the-art image and video super-resolution algorithms on compressed videos and compared their performance to that of our proposed method.
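As a rough illustration of the input arrangement described above (assumed details, not the authors' exact architecture), the PyTorch sketch below stacks a center low-resolution frame and its motion-compensated neighbors along the channel axis and maps them to a single super-resolved frame with a small SRCNN-style network; motion compensation and any pretraining on images are assumed to happen outside this module.

import torch
import torch.nn as nn

class VideoSRCNN(nn.Module):
    """Illustrative sketch: the center frame and its motion-compensated neighbors
    are stacked along the channel axis and mapped to one super-resolved frame.
    Inputs are assumed to be bicubically upscaled to the target resolution first."""
    def __init__(self, num_frames=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(num_frames, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, frames):            # frames: (batch, num_frames, H, W), grayscale
        return self.body(frames)

# usage: y = VideoSRCNN()(torch.randn(1, 3, 128, 128))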
Classification and Comparison of Range-Based Localization Techniques in Wireless Sensor Networks
For the majority of WSN applications, it is desirable to know the locations of the sensors. In WSNs, localization schemes are used to obtain this kind of information: localization techniques determine the geographical positions of sensor nodes. The position parameters of sensor nodes are also useful for many network-management operations, such as routing, topology control, and security maintenance. It is therefore very important that every node report its location information accurately. GPS is very accurate for location determination, but it is expensive in terms of cost and node energy, so it is not practical in WSNs. Many localization techniques, both range-based and range-free, have been proposed to calculate positions for randomly deployed sensor nodes. The accuracy of a localization technique is of primary importance before it is implemented. With specific hardware, range-based schemes typically achieve high accuracy based on either node-to-node distances or angles. In this paper, our main focus is on range-based localization: to help network designers find out which techniques and algorithms are suitable for their applications, the most popular range-based localization techniques are classified and compared.
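As an example of how a range-based scheme turns node-to-node distances into a position estimate, the sketch below solves the standard linearized trilateration least-squares problem from three or more anchors; it is a generic illustration and not tied to any specific algorithm surveyed in the paper.

import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2-D node position from anchor coordinates and measured distances.
    Subtracting the first range equation from the others linearizes the problem:
      2*(x_i - x_1)*x + 2*(y_i - y_1)*y = r_1^2 - r_i^2 + x_i^2 - x_1^2 + y_i^2 - y_1^2
    anchors: (n, 2) array-like, ranges: length-n array-like, n >= 3, anchors not collinear."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x1, y1, r1 = anchors[0, 0], anchors[0, 1], ranges[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r1**2 - ranges[1:]**2
         + anchors[1:, 0]**2 - x1**2 + anchors[1:, 1]**2 - y1**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# e.g. trilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]) is approximately (5, 5)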
An Improved SDA Based Defect Prediction Framework for Both Within-Project and Cross-Project Class-Imbalance Problems
Background. Solving the class-imbalance problem of within-project software defect prediction (SDP) is an important research topic. Although some class-imbalance learning methods have been presented, there exists room for improvement. For cross-project SDP, we found that the class-imbalanced source usually leads to misclassification of defective instances. However, only one work has paid attention to this cross-project class-imbalance problem. Objective. We aim to provide effective solutions for both within-project and cross-project class-imbalance problems. Method. Subclass discriminant analysis (SDA), an effective feature learning method, is introduced to solve the problems. It can learn features with more powerful classification ability from original metrics. For within-project prediction, we improve SDA for achieving balanced subclasses and propose the improved SDA (ISDA) approach. For cross-project prediction, we employ the semi-supervised transfer component analysis (SSTCA) method to make the distributions of source and target data consistent, and propose the SSTCA+ISDA prediction approach. Results. Extensive experiments on four widely used datasets indicate that: 1) ISDA-based solution performs better than other state-of-the-art methods for within-project class-imbalance problem; 2) SSTCA+ISDA proposed for cross-project class-imbalance problem significantly outperforms related methods. Conclusion. Within-project and cross-project class-imbalance problems greatly affect prediction performance, and we provide a unified and effective prediction framework for both problems.
Self-Efficacy for Reading and Writing: Influence of Modeling, Goal Setting, and Self-Evaluation
Perceived self-efficacy, or students' personal beliefs about their capabilities to learn or perform behaviors at designated levels, plays an important role in their motivation and learning. Self-efficacy is a key mechanism in social cognitive theory, which postulates that achievement depends on interactions between behaviors, personal factors, and environmental conditions. Self-efficacy affects choice of tasks, effort, persistence, and achievement. Sources of self-efficacy information include personal accomplishments, vicarious experiences, social persuasion, and physiological indicators. At the outset of learning activities, students have goals and a sense of self-efficacy for attaining them. Self-evaluations of learning progress sustain self-efficacy and motivation. Research on academic learning is summarized, showing how modeling, goal setting, and self-evaluation affect self-efficacy, motivation, and learning. Suggestions for applying these ideas to teaching are provided. Article: Researchers and practitioners interested in student motivation and learning in academic settings are focusing increasingly on the role of students' thoughts and beliefs during learning. This focus contrasts with prior views stressing students' pre-existing skills and abilities. Although these factors are important, by themselves they are insufficient to explain the variations in motivation and learning among students with comparable skills and abilities. In this article I discuss theory, research, and applications relevant to one type of personal belief: perceived self-efficacy. Self-efficacy refers to beliefs about one's capabilities to learn or perform behaviors at designated levels (Bandura, 1986, 1997). Research shows that self-efficacy predicts students' academic motivation and learning (Pajares, 1996; Schunk, 1995, 1996). Within this context, I present research evidence showing how social models, goal setting, and self-evaluation affect self-efficacy, motivation, and learning. Modeling refers to patterning one's thoughts, beliefs, actions, strategies, and behaviors after those displayed by one or more models. A goal, or what one is consciously trying to accomplish, provides a standard against which people can gauge their progress (Schunk, 1990). Self-evaluation comprises (a) self-judgments of present performance through comparisons with one's goal and (b) self-reactions to those judgments by deeming performance noteworthy, unacceptable, and so forth (Schunk, 1996). Research has demonstrated the effects of these processes on students' academic achievement in various domains (Schunk & Zimmerman, 1998). The article concludes with implications of the theory and research for educational practice. THEORETICAL BACKGROUND Social Cognitive Theory Self-efficacy is part of a larger theoretical framework known as social cognitive theory, which postulates that human achievement depends on interactions between one's behaviors, personal factors (e.g., thoughts and beliefs), and environmental conditions (Bandura, 1986, 1997). With respect to the link between personal factors and behaviors, much research shows that students' self-efficacy beliefs influence such achievement behaviors as choice of tasks, effort, persistence, and achievement (Schunk, 1995). Conversely, students' behaviors can alter efficacy beliefs. As students work on tasks, they note their progress toward their goals.
Goal progress and accomplishment convey to students that they are capable of performing well, which enhances self-efficacy for continued learning. Students’ behaviors and classroom environments also are related. Consider a teacher who directs students’ attention by stating, “Look at this.” Environmental influence on behavior occurs when students direct their attention without conscious deliberation. Students’ behaviors also can alter their environments. When students answer questions incorrectly, the teacher may reteach the lesson differently rather than continue with the original material. Personal and environmental factors affect one another. As an example of how beliefs can affect the environment, consider students with high and low self-efficacy for learning. Those with high efficacy may view the task as a challenge and work diligently to master it, thereby creating a productive classroom environment. Those with low efficacy may attempt to avoid the task, which can disrupt the classroom. The influence of environment on thought is evident when teachers give students feedback (e.g., “That’s right, you really are good at this.”), which raises self-efficacy and sustains motivation for learning.
Accelerating Lloyd's Algorithm for k-Means Clustering
The k-means clustering algorithm, a staple of data mining and unsupervised learning, is popular because it is simple to implement, fast, easily parallelized, and offers intuitive results. Lloyd's algorithm is the standard batch, hill-climbing approach for minimizing the k-means optimization criterion. It spends the vast majority of its time computing distances between each of the k cluster centers and the n data points. It turns out that much of this work is unnecessary, because points usually stay in the same clusters after the first few iterations. In the last decade researchers have developed a number of optimizations to speed up Lloyd's algorithm for both low- and high-dimensional data. In this chapter we survey some of these optimizations and present new ones. In particular we focus on those which avoid distance calculations by using the triangle inequality. By caching known distances and updating them efficiently with the triangle inequality, these algorithms can provably avoid many unnecessary distance calculations. All the optimizations examined produce the same results as Lloyd's algorithm given the same input and initialization, so they are suitable as drop-in replacements. These new algorithms can run many times faster and compute far fewer distances than the standard unoptimized implementation. In our experiments, it is common to see speedups of over 30–50x compared to Lloyd's algorithm. We examine the trade-offs of using these methods with respect to the number of examples n, the number of dimensions d, the number of clusters k, and the structure of the data.
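One of the simplest triangle-inequality prunings in this line of work can be sketched as follows: if the distance from a point to its current best center c_a is at most half the distance between c_a and another center c_b, then c_b cannot be closer, so the distance to c_b need not be computed. The code below is an illustrative single assignment step using this test, not one of the chapter's full algorithms.

import numpy as np

def assign_with_pruning(X, centers):
    """One Lloyd assignment step that skips provably unnecessary distance computations.
    If dist(x, c_a) <= 0.5 * dist(c_a, c_b), the triangle inequality gives
    dist(x, c_b) >= dist(c_a, c_b) - dist(x, c_a) >= dist(x, c_a),
    so center c_b cannot beat the current best c_a."""
    center_dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    labels = np.empty(len(X), dtype=int)
    skipped = 0
    for i, x in enumerate(X):
        best, best_d = 0, np.linalg.norm(x - centers[0])
        for j in range(1, len(centers)):
            if best_d <= 0.5 * center_dists[best, j]:
                skipped += 1                  # pruned by the triangle inequality
                continue
            d = np.linalg.norm(x - centers[j])
            if d < best_d:
                best, best_d = j, d
        labels[i] = best
    return labels, skipped

The assignments are identical to those of a brute-force pass; only the number of distance evaluations changes.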
Adaptive game AI with dynamic scripting
Online learning in commercial computer games allows computer-controlled opponents to adapt to the way the game is being played. As such it provides a mechanism to deal with weaknesses in the game AI, and to respond to changes in human player tactics. We argue that online learning of game AI should meet four computational and four functional requirements. The computational requirements are speed, effectiveness, robustness and efficiency. The functional requirements are clarity, variety, consistency and scalability. This paper investigates a novel online learning technique for game AI called 'dynamic scripting', which uses an adaptive rulebase for the generation of game AI on the fly. The performance of dynamic scripting is evaluated in experiments in which adaptive agents are pitted against a collection of manually designed tactics in a simulated computer role-playing game. Experimental results indicate that dynamic scripting succeeds in endowing computer-controlled opponents with adaptive performance. To further improve the dynamic-scripting technique, an enhancement is investigated that allows scaling of the difficulty level of the game AI to the human player's skill level. With the enhancement, dynamic scripting meets all computational and functional requirements. The applicability of dynamic scripting in state-of-the-art commercial games is demonstrated by implementing the technique in the game Neverwinter Nights. We conclude that dynamic scripting can be successfully applied to the online adaptation of game AI in commercial computer games.
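A minimal sketch of the rulebase mechanism described above: rules are drawn into a script with probability proportional to their weights, and the weights of the rules that were used are adjusted according to the outcome of the encounter. The constants, the fitness scaling, and the absence of weight redistribution to unused rules are simplifications for illustration, not the paper's exact update scheme.

import random

def select_script(rulebase, weights, script_size):
    """Draw `script_size` distinct rules, each picked with probability proportional to its weight."""
    rules = list(rulebase)
    script = []
    for _ in range(min(script_size, len(rules))):
        total = sum(weights[r] for r in rules)
        pick = random.uniform(0, total)
        acc = 0.0
        for r in rules:
            acc += weights[r]
            if acc >= pick:
                script.append(r)
                rules.remove(r)
                break
    return script

def update_weights(weights, script, fitness, reward=10.0, w_min=1.0, w_max=100.0):
    """After an encounter, reward (fitness > 0.5) or penalize the rules that were used.
    fitness is in [0, 1]; unused rules are left unchanged in this simplified version."""
    delta = reward * (fitness - 0.5) * 2.0
    for r in script:
        weights[r] = min(max(weights[r] + delta, w_min), w_max)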
Multiparametric Cardiac Magnetic Resonance Survey in Children With Thalassemia Major: A Multicenter Study.
BACKGROUND Cardiovascular magnetic resonance (CMR) plays a key role in the management of thalassemia major patients, but few data are available in the pediatric population. This study aims at a retrospective multiparametric CMR assessment of myocardial iron overload, function, and fibrosis in a cohort of pediatric thalassemia major patients. METHODS AND RESULTS We studied 107 pediatric thalassemia major patients (61 boys, median age 14.4 years). Myocardial and liver iron overload were measured by the T2* multiecho technique. Atrial dimensions and biventricular function were quantified by cine images. Late gadolinium enhancement images were acquired to detect myocardial fibrosis. All scans were performed without sedation. Overall, 21.4% of the patients showed significant myocardial iron overload, which correlated with lower compliance with chelation therapy (P<0.013). Serum ferritin ≥2000 ng/mL and liver iron concentration ≥14 mg/g/dw were detected as the best thresholds for predicting cardiac iron overload (P=0.001 and P<0.0001, respectively). A homogeneous pattern of myocardial iron overload was associated with negative cardiac remodeling and significantly higher liver iron concentration (P<0.0001). Myocardial fibrosis by late gadolinium enhancement was detected in 15.8% of the patients (youngest affected child 13 years old). It was correlated with significantly lower heart T2* values (P=0.022) and negative cardiac remodeling indexes. A pathological magnetic resonance imaging liver iron concentration was found in 77.6% of the patients. CONCLUSIONS Cardiac damage detectable by a multiparametric CMR approach can occur early in thalassemia major patients. Thus, the first T2* CMR assessment should be performed as early as feasible without sedation to tailor the chelation treatment. Conversely, late gadolinium enhancement CMR can be postponed until the teenage years.
Tackling the digitalization challenge: how to benefit from digitalization in practice
Digitalization has been identified as one of the major trends changing society and business. Digitalization causes changes for companies due to the adoption of digital technologies in the organization or in its operating environment. This paper discusses digitalization from the viewpoint of diverse case studies carried out to collect data from several companies, and a literature study that complements the data. The paper describes the first version of a digital transformation model, derived from a synthesis of these industrial cases, which provides a starting point for a systematic approach to tackling digital transformation. The model is intended to help companies systematically handle the changes associated with digitalization. It consists of four main steps, starting with positioning the company in digitalization and defining goals for the company, and then analyzing the company's current state with respect to the digitalization goals. Next, a roadmap for reaching the goals is defined and implemented in the company. These steps are iterative and can be repeated several times. Although company situations vary, these steps will help companies approach digitalization systematically and take the steps necessary to benefit from it.
Strip-type Grid Array Antenna with a Two-Layer Rear-Space Structure
A strip-type grid array antenna is analyzed using the finite-difference time-domain method. The space between the grid array and the ground plane, designated as the rear space, is filled with a dielectric layer and an air layer (two-layer rear-space structure). A VSWR bandwidth of approximately 13% is obtained with a maximum directivity of approximately 18 dB. The grid array exhibits a narrow beam with low side lobes. The cross-polarization component is small (-25 dB), as desired.
The discovery of a source of adult hematopoietic cells in the embryo.
This essay is about the 1975 JEEM paper by Françoise Dieterlen-Lièvre (Dieterlen-Lièvre, 1975) and the studies that followed it, which indicated that the adult hematopoietic system in the avian embryo originates, not from the blood islands of the extraembryonic yolk sac as was then believed, but from the body of the embryo itself. Dieterlen-Lièvre's 1975 paper created a paradigm shift in hematopoietic research, and provided a new and lasting focus on hematopoietic activity within the embryo body.
Disparity of end-of-life care in cancer patients with and without schizophrenia: A nationwide population-based cohort study
BACKGROUND Cancer patients with schizophrenia may face disparities in end-of-life care, and it is unclear whether schizophrenia affects their medical care and treatment. METHODS We conducted a nationwide population-based cohort study based on the National Health Insurance Research Database of Taiwan. The study population included patients >20 years old who were newly diagnosed as having one of six common cancers between 2000 and 2012 (schizophrenia cohort: 1911 patients with both cancer and schizophrenia; non-schizophrenia cohort: 7644 cancer patients without schizophrenia). We used a multiple logistic regression model to analyze the differences in medical treatment between the two cohorts in the final 1 and 3 months of life. RESULTS In the 1 month before death, there was higher intensive care unit utilization in the schizophrenia group [odds ratio (OR)=1.21, 95% confidence interval (CI)=1.07-1.36] and no significant difference between the groups in hospital stay length or hospice care. The schizophrenia patients received less chemotherapy (OR=0.60, 95% CI=0.55-0.66) but more invasive interventions, such as cardiopulmonary resuscitation (OR=1.34, 95% CI=1.15-1.57). Advanced diagnostic examinations, such as computed tomography/magnetic resonance imaging/sonography (OR=0.80, 95% CI=0.71-0.89), were used less often for the schizophrenia patients. Results for the 1- and 3-month periods prior to death were similar. CONCLUSION End-of-life cancer patients with schizophrenia underwent more frequent invasive treatments but less chemotherapy and fewer examinations. Treatment plans/advance directives should be discussed with patients/families early to enhance end-of-life care quality and reduce health care disparities caused by schizophrenia.
Health-related quality of life, psychosocial strains, and coping in parents of children with chronic renal failure
Health-related quality of life (HRQOL) in parents of children suffering from renal disease is often diminished by the illness burden experienced in daily life and by unfavorable ways of coping. Our aim was to examine the relationship between psychosocial strains perceived by parents, their ways of coping, and HRQOL. In an anonymous cross-sectional study, parents completed a questionnaire concerning psychosocial strains, coping strategies, and HRQOL, as well as sociodemographic and illness parameters. Study participants were recruited in two outpatient dialysis centers. Participating in the study were 195 parents (105 mothers, 90 fathers; age 43 ± 8 years; representing 108 families) of children suffering from renal disease (age 12 ± 5 years). Parents of children with chronic renal failure reported moderate HRQOL, with parents of children undergoing dialysis experiencing more limitations in quality of life than parents of children living with a kidney graft and parents of children undergoing conservative treatment. Mothers experienced lower HRQOL and higher psychosocial strains than fathers. HRQOL was predicted by the coping strategies “focusing on child” (β = –0.25), “improving marital relationship” (β = 0.24), “seeking social support” (β = –0.22) and “self-acceptation and growth” (β = 0.19), as well as parents' perceived limitation by illness in daily life (β = –0.15; explained variance 57%). In the comprehensive care for families with a child suffering from a renal disease, screening for psychosocial strains and ways of coping, along with applying interventions to strengthen adaptive coping strategies, may be a preventative means of improving parents' quality of life.
Ruminating Reader: Reasoning with Gated Multi-Hop Attention
To answer the question in a machine comprehension (MC) task, models need to establish the interaction between the question and the context. To tackle the problem that a single-pass model cannot reflect on and correct its answer, we present the Ruminating Reader. The Ruminating Reader adds a second pass of attention and a novel information fusion component to the Bi-Directional Attention Flow model (BiDAF). We propose novel layer structures that construct a query-aware context vector representation and fuse the encoding representation with the intermediate representation on top of the BiDAF model. We show that a multi-hop attention mechanism can be applied to a bi-directional attention structure. In experiments on SQuAD, we find that the Ruminating Reader outperforms the BiDAF baseline by 2.1 F1 points and 2.7 EM points. Our analysis shows that the different hops of the attention have different responsibilities in selecting answers.
Bandwidth Enhancement of a Single-Feed Circularly Polarized Antenna Using a Metasurface: Metamaterial-based wideband CP rectangular microstrip antenna.
In this article, the authors use a 7 × 7 rectangular-ring unit-cell metasurface and a single-feed CP rectangular slotted patch antenna to enhance bandwidth. The antenna consists of a rectangular slotted patch radiator, a metasurface with an array of rectangular-ring cells, and a coaxial feed. Using the metasurface results in a CP bandwidth roughly sevenfold that of a conventional antenna. The measured 3-dB axial-ratio (AR) bandwidth of an antenna prototype printed on an FR4 substrate is 28.3% (3.62 GHz-4.75 GHz), with a 2:1 voltage standing wave ratio (VSWR) bandwidth of 36%. A 6.0-dBic boresight gain with a variation of 1.5 dB is achieved across the frequency band of 3.35 GHz-4.75 GHz, with a maximum gain of 7.4 dBic at 4.1 GHz. We measure the antenna prototype and compare it with simulations performed using CST Microwave Studio. Parametric studies aid in understanding the proposed antenna's operation mechanism.
Mental health problems and help-seeking behavior among college students.
Mental disorders are as prevalent among college students as same-aged non-students, and these disorders appear to be increasing in number and severity. The purpose of this report is to review the research literature on college student mental health, while also drawing comparisons to the parallel literature on the broader adolescent and young adult populations.
Measuring the Intrinsic Dimension of Objective Landscapes
Many recently trained neural networks employ large numbers of parameters to achieve good performance. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate are such notions? How many parameters are really needed? In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace. We slowly increase the dimension of this subspace, note at which dimension solutions first appear, and define this to be the intrinsic dimension of the objective landscape. The approach is simple to implement, computationally tractable, and produces several suggestive conclusions. Many problems have smaller intrinsic dimensions than one might suspect, and the intrinsic dimension for a given dataset varies little across a family of models with vastly different sizes. This latter result has the profound implication that once a parameter space is large enough to solve a problem, extra parameters serve directly to increase the dimensionality of the solution manifold. Intrinsic dimension allows some quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning where we conclude, for example, that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and playing Atari Pong from pixels is about as hard as classifying CIFAR-10. In addition to providing new cartography of the objective landscapes wandered by parameterized models, the method is a simple technique for constructively obtaining an upper bound on the minimum description length of a solution. A byproduct of this construction is a simple approach for compressing networks, in some cases by more than 100 times.
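The core construction can be illustrated on a toy objective: freeze the native parameters at an initial value, draw a fixed random projection P from a d-dimensional subspace into the D-dimensional native space, and train only the subspace coordinates. The sketch below (a minimal NumPy illustration on a quadratic loss, not the paper's neural-network experiments) shows the reparameterization theta = theta_0 + P theta_d and the chain-rule gradient P^T dL/dtheta.

import numpy as np

rng = np.random.default_rng(0)
D, d = 1000, 20                        # native and subspace dimensions (illustrative)

# toy objective: L(theta) = 0.5 * ||theta - theta_star||^2 in the native space
theta_star = rng.normal(size=D)
theta0 = np.zeros(D)                   # frozen initial native parameters

P = rng.normal(size=(D, d))
P /= np.linalg.norm(P, axis=0)         # approximately orthonormal columns

theta_d = np.zeros(d)                  # the only trained parameters
lr = 0.1
for _ in range(500):
    theta = theta0 + P @ theta_d       # map subspace coordinates into the native space
    grad_native = theta - theta_star   # dL/dtheta for the toy quadratic loss
    theta_d -= lr * (P.T @ grad_native)  # chain rule: dL/dtheta_d = P^T dL/dtheta

final_loss = 0.5 * np.sum((theta0 + P @ theta_d - theta_star) ** 2)

Sweeping d upward and recording the smallest d at which final_loss reaches a chosen fraction of the full-space optimum mirrors the paper's definition of intrinsic dimension.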
Binomial filters
Binomial filters are simple and efficient structures based on the binomial coefficients for implementing Gaussian filtering. They do not require multipliers and can therefore be implemented efficiently in programmable hardware. There are many possible variations of the basic binomial filter structure, and they provide a wide range of space-time trade-offs; a number of these designs have been captured in a parametrised form and their features are compared. This technique can be used for multi-dimensional filtering, provided that the filter is separable. The numerical performance of binomial filters, and their implementation using field-programmable devices for an image processing application, are also discussed.
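A minimal sketch of the idea, assuming a 1-D signal: the order-n binomial kernel is the row of Pascal's triangle obtained by convolving [1, 1] with itself n-1 times, it approximates a Gaussian, and applying it amounts to repeated adjacent-sample averaging, which needs only adds and shifts (the NumPy convolutions here stand in for that hardware structure).

import numpy as np

def binomial_kernel(n):
    """Return the length-n binomial kernel, normalized to sum to 1.
    e.g. n=3 -> [1, 2, 1]/4, n=5 -> [1, 4, 6, 4, 1]/16 (approximates a Gaussian)."""
    k = np.array([1.0])
    for _ in range(n - 1):
        k = np.convolve(k, [1.0, 1.0])
    return k / k.sum()

def binomial_filter_1d(x, n):
    """Multiplier-free view: each pass convolves with [1, 1]/2 (one add and one shift),
    so n-1 passes apply the normalized order-n binomial kernel."""
    y = np.asarray(x, dtype=float)
    for _ in range(n - 1):
        y = np.convolve(y, [0.5, 0.5])
    return y

# separable 2-D filtering: apply binomial_filter_1d along rows, then along columns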
A New Intelligence-Based Approach for Computer-Aided Diagnosis of Dengue Fever
Identification of the influential clinical symptoms and laboratory features that help in the diagnosis of dengue fever (DF) in the early phase of the illness would aid in designing effective public health management and virological surveillance strategies. Keeping this as our main objective, we develop in this paper a new computational intelligence-based methodology that predicts the diagnosis in real time, minimizing the number of false positives and false negatives. Our methodology consists of three major components: 1) a novel missing value imputation procedure that can be applied to any dataset consisting of categorical (nominal) and/or numeric (real or integer) attributes; 2) a wrapper-based feature selection method with genetic search for extracting a subset of the most influential symptoms that can diagnose the illness; and 3) an alternating decision tree method that employs boosting for generating highly accurate decision rules. The predictive models developed using our methodology are found to be more accurate than the state-of-the-art methodologies used in the diagnosis of DF.
Privacy Impacts of Data Encryption on the Efficiency of Digital Forensics Technology
Owing to a number of reasons, the deployment of encryption solutions is becoming ubiquitous at both organizational and individual levels. The most emphasized reason is the necessity to ensure confidentiality of privileged information. Unfortunately, encryption is also popular as cyber-criminals' escape route from the grasp of digital forensic investigations. The direct encryption of data or indirect encryption of storage devices, more often than not, prevents access to the information contained therein. This consequently leaves the forensic investigation team, and subsequently the prosecution, with little or no evidence to work with in sixty percent of such cases. However, it is unthinkable to jeopardize the successes brought by encryption technology to information security in favour of digital forensics technology. This paper examines what data encryption contributes to information security, and then highlights its contributions to digital forensics of disk drives. The paper also discusses the available ways and tools in digital forensics to get around the problems posed by encryption. Particular attention is paid to the TrueCrypt encryption solution to illustrate the ideas being discussed. The paper then compares encryption's contributions in both realms to justify the need for new technologies to forensically defeat data encryption as the only solution, whilst maintaining the privacy goal of users. Keywords: Encryption; Information Security; Digital Forensics; Anti-Forensics; Cryptography; TrueCrypt
Two-Dimensional Canonical Correlation Analysis
In this letter, we present a method of two-dimensional canonical correlation analysis (2D-CCA) where we extend the standard CCA in such a way that relations between two different sets of image data are directly sought without reshaping images into vectors. We stress that 2D-CCA dramatically reduces the computational complexity, compared to the standard CCA. We show the useful behavior of 2D-CCA through numerical examples of correspondence learning between face images in different poses and illumination conditions.
Illusion and Dazzle: Adversarial Optical Channel Exploits against Lidars for Automotive Applications
With the advancement in computing, sensing, and vehicle electronics, autonomous vehicles are being realized. For autonomous driving, environment perception sensors such as radars, lidars, and vision sensors play core roles as the eyes of a vehicle; therefore, their reliability cannot be compromised. In this work, we present a spoofing-by-relaying attack, which can not only induce illusions in the lidar output but can also cause the illusions to appear closer than the location of the spoofing device. A recent work showed the former attack to be effective, but the latter had never been demonstrated. Additionally, we present a novel saturation attack against lidars, which can completely incapacitate a lidar from sensing in a certain direction. The effectiveness of both approaches is experimentally verified against Velodyne's VLP-16.
Migration of central lines from the superior vena cava to the azygous vein.
AIM To report 11 cases of central venous access catheters migrating from the superior vena cava to the azygos vein in order to raise radiologists' awareness of this possibility. MATERIALS AND METHODS This is a retrospective review of the clinical history and imaging of 11 patients whose central line migrated from the superior vena cava to the azygos vein. The time course of migration, access route of the catheters, outcome, and depth of placement in the superior vena cava were evaluated. RESULTS All of these catheters were placed from the left; six through the subclavian vein, four as PICC lines, and one from the left internal jugular vein. Seven of the catheters were originally positioned in the superior vena cava. Four of the catheters were originally positioned in the azygos vein and were repositioned into the superior vena cava at the time of placement. The time to migration ranged from 2 to 126 days, average 43 days. In three cases, the migration was not reported at the first opportunity, resulting in a delay in diagnosis ranging from 10 to 27 days. All but one of the catheters extended at least 3.5 cm (range 1.8-7 cm) below the top of the right mainstem bronchus when in the superior vena cava. CONCLUSION Risk factors for migration into the azygos vein include placement from a left-sided approach and original positioning in the azygos vein with correction at placement. The depth of placement in the superior vena cava was not a protective factor. It is important to recognize migration because of the elevated risk of complications when central lines are placed in the azygos vein.
Design and Validation of a Synchronous Reluctance Motor With Single Tooth Windings
This paper presents for the first time the analysis and experimental validation of a six-slot four-pole synchronous reluctance motor with nonoverlapping fractional slot-concentrated windings. The machine exhibits high torque density and efficiency due to its high fill factor coils with very short end windings, facilitated by a segmented stator and bobbin winding of the coils. These advantages are coupled with its inherent robustness and low cost. The topology is presented as a logical step forward in advancing synchronous reluctance machines that have been universally wound with a sinusoidally distributed winding. The paper presents the motor design, performance evaluation through finite element studies and validation of the electromagnetic model, and thermal specification through empirical testing. It is shown that high performance synchronous reluctance motors can be constructed with single tooth wound coils, but considerations must be given regarding torque quality and the d-q axis inductances.
Ground coplanar waveguide to rectangular waveguide transition
The ever-increasing interest in microwave and millimeter-wave monolithic integrated circuits has led to the use of coplanar waveguide (CPW) owing to its many advantages, such as compact size and the capability to incorporate shunt and series elements without the need for backside processing and via holes. On the other hand, rectangular waveguides are widely used at higher frequencies for their low-loss characteristics in applications such as high-Q filters, resonators, and antenna feed networks. Therefore, in millimeter- and submillimeter-wave applications in which active and passive components are integrated, waveguide structures often must be combined with CPW lines. For those applications, low-loss transitions are required.
The dynamics of absorption coefficients of CDOM and particles in the St. Lawrence estuarine system: Biogeochemical and physical implications
Keywords: CDOM, particles, optical properties, absorption, molecular weight, water mass tracer, estuarine mixing, estuary, fjord. Absorption spectra of chromophoric dissolved organic matter (CDOM) and particles were obtained in May 2007 in the St. Lawrence estuary (SLE, Canada), the northwestern Gulf of St. Lawrence (NWG), and the Saguenay Fjord (CDOM only), the main tributary of the SLE. CDOM absorption generally decreased downstream and with depth and showed an inverse relationship to tidal cycles. Phytoplankton absorption in surface water of the SLE increased seaward while non-algal particle absorption trended oppositely; both variables declined with depth. Surface-water CDOM absorption surpassed particle absorption in the SLE, while phytoplankton absorption dominated in the NWG. Elevated non-algal and CDOM absorption were found in the turbidity maximum zone near the head of the SLE. Enriched CDOM absorption also occurred in the bottom water of the lower SLE and NWG. The spectral slope ratio of CDOM absorption, defined as the ratio of the spectral slope between 275 and 295 nm to that between 350 and 400 nm, was confirmed to be an indicator of the source and molecular weight of CDOM. This surrogate functionality, however, failed for absorption spectra exhibiting shoulders at short ultraviolet wavelengths observed in deep waters of the SLE and NWG. CDOM absorption mainly displayed conservative mixing behavior in both the SLE and the Saguenay Fjord. CDOM was employed to trace the source identity of the Fjord's deepwater. It was found that the marine end member of the Fjord's deepwater possessed a salinity of 32.92 and a temperature of ca. 1 °C and originated from the intermediate cold layer of the lower SLE. The marine end member contributed 94% of the deepwater by volume, while freshwater mainly flowing from the Saguenay River supplied the remaining 6%. Implications of our results for remote sensing-based assessments of primary productivity, surface-water circulation, and water-column photochemistry in the SLE are also discussed. 1.1. Overview of absorption properties in marine waters: Chromophoric dissolved organic matter (CDOM), phytoplankton, and non-algal particles (NAP) are major ultraviolet (UV) and visible light-absorbing constituents in the ocean. Independent variability of these optically active constituents complicates remote sensing of chlorophyll a concentration (chl a) and hence the estimation of primary productivity from ocean color imagery (e.g. Antoine et al., 1996; Behrenfeld and Falkowski, 1997). Using standard empirical algorithms to retrieve …
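For the spectral slope ratio mentioned above, a common computation (an illustrative sketch, not necessarily the exact fitting procedure used in this study) fits a(λ) = a(λ_ref) exp(-S (λ - λ_ref)) over each wavelength window by linear regression on the log-transformed spectrum and then takes S_R = S(275-295) / S(350-400):

import numpy as np

def spectral_slope(wavelengths, absorption, lo, hi):
    """Fit a(lambda) = a0 * exp(-S * lambda) over [lo, hi] nm by linear regression
    on ln(a); returns the (positive) spectral slope S in nm^-1."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    absorption = np.asarray(absorption, dtype=float)
    m = (wavelengths >= lo) & (wavelengths <= hi) & (absorption > 0)
    slope, _ = np.polyfit(wavelengths[m], np.log(absorption[m]), 1)
    return -slope

def slope_ratio(wavelengths, absorption):
    """Spectral slope ratio S_R = S(275-295 nm) / S(350-400 nm)."""
    return (spectral_slope(wavelengths, absorption, 275, 295)
            / spectral_slope(wavelengths, absorption, 350, 400))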
Crystallization kinetics and microstructures of NaF-CaF2-Al2O3-SiO2 glass-ceramics
Yb3+-doped transparent oxyfluoride glass in the NaF-CaF2-Al2O3-SiO2 system was prepared by the conventional melt-quenching method, and transparent oxyfluoride glass-ceramics were obtained by optimized heat treatment. The influences of introducing alkali metal oxides and alkaline earth oxides on glass-forming ability were investigated by differential thermal analysis (DTA), X-ray diffraction (XRD) analysis, and transmission electron microscopy (TEM). The crystallization mechanism of the NaF-CaF2-Al2O3-SiO2 system glass was analyzed by a kinetics method, and the influences of heat-treatment conditions on the crystallization behavior and the microstructures of this system's glass-ceramics were studied. The results show that the glass-forming ability is weakened by introducing alkali metal oxides, while the crystallization stability is improved by introducing alkaline earth oxides. In addition, the main crystal phase of the glass-ceramics is CaF2 and its crystallization activation energy is 345.8 kJ/mol. The size of the CaF2 grains increases with increasing crystallization temperature, and the amount of crystals increases with increasing holding time. A novel transparent Yb3+-doped glass-ceramic is obtained by optimized heat treatment, in which the grain size is less than 50 nm and the degree of crystallization is about 30%.
Time dependent adaptive pricing for mobile internet access
Aiming to meet the explosive growth of mobile data traffic and reduce network congestion, we study Time Dependent Adaptive Pricing (TDAP) with threshold policies to motivate users to shift their Internet access from peak hours to off-peak hours. With the proposed TDAP scheme, Internet Service Providers (ISPs) will be able to use less network capacity to provide users with Internet access service of the same QoS. Simulation and analysis are carried out to investigate the performance of the proposed TDAP scheme based on real Internet traffic patterns.
Global-and-local attention networks for visual recognition
State-of-the-art deep convolutional networks (DCNs) such as squeeze-and-excitation (SE) residual networks implement a form of attention, also known as contextual guidance, which is derived from global image features. Here, we explore a complementary form of attention, known as visual saliency, which is derived from local image features. We extend the SE module with a novel global-and-local attention (GALA) module which combines both forms of attention – resulting in state-of-the-art accuracy on ILSVRC. We further describe ClickMe.ai, a large-scale online experiment designed for human participants to identify diagnostic image regions to co-train a GALA network. Adding humans-in-the-loop is shown to significantly improve network accuracy, while also yielding visual features that are more interpretable and more similar to those used by human observers.
Marketing Science , Models , Monopoly Models , and Why We Need Them
Despite popular belief, the marketing discipline was an offshoot of economics (Bartels 1951, Sheth et al. 1988). Early researchers, often educated in economics, felt economics was preoccupied with variables such as prices and costs. They focused on other variables. They also investigated marketing as a productive activity (i.e., adding value) by adopting different sets of assumptions other than economics (Bartels 1976). Were profit variations across firms caused entirely by random exogenous factors beyond managerial control, marketing would have no "value-added," and the discipline of marketing would have diminished purpose. Mathematical marketing models appeared much later as a precise, logical and scientific way to "add value." By 1970, researchers developed mathematical models for many purposes, including better forecasting, integration of data, and understanding of markets. Just a few of the many high-impact pre-1970 marketing models include those of Bass (1969a, 1969b), Bass and Parsons (1969), Cox (1967), Frank et al. (1965), Friedman (1958), Green and Rao (1970), Kuehn (1962), Little and Lodish (1969), Little et al. (1963), Morrison (1969), Telser (1960, 1962), Urban (1969), Vidale and Wolfe (1957), and Winters (1960). Today, mathematical models in Marketing Science cover many topics. Just a few examples (presented alphabetically) include: advance selling (Xie and Shugan 2001); creating customer satisfaction (Anderson and Sullivan 1993); direct marketing (e.g., Rossi et al. 1996); forming empirical generalizations (e.g., Mahajan et al. 1995); identifying first-mover advantages (e.g., Kalyanaram et al. 1995); implementing a wide range of marketing instruments (e.g., Padmanabhan and Rao 1993, Shaffer and Zhang 1995); modeling customer heterogeneity (Gonul and Srinivasan 1993); implementing the new empirical IO (Kadiyali et al. 2000); creating new product development (e.g., Griffin and Hauser 1993); understanding channels (e.g., Messinger and Narasimhan 1995); understanding dynamic brand choice (e.g., Erdem and Keane 1996); and understanding market evolution (e.g., Dekimpe and Hanssens 1995). Mathematics, as the language of science, allows interplay between empirical and theoretical research (Hauser 1985). Marketing Science publishes both empirical and theoretical mathematical models. Precise definitions of mathematical marketing models are controversial (Leeflang et al. 2000). We could narrowly define models as mathematical optimizations of marketing variables. However, Marketing Science adopts a more generic definition, which includes mathematical representations that answer important research questions in marketing. That definition allows publication of mathematical models from a multitude of disciplines, including Management Science, Statistics, Econometrics, Economics, Psychometrics, and Psychology. Note that the following discussion only considers models used by researchers. Models for use by …
Influenza‐associated mortality determined from all‐cause mortality, Denmark 2010/11‐2016/17: The FluMOMO model
BACKGROUND In temperate zones, all-cause mortality exhibits a marked seasonality, and influenza represents a major cause of winter excess mortality. We present a statistical model, FluMOMO, which estimates influenza-associated mortality from all-cause mortality data, and apply it to Danish data from 2010/11 to 2016/17. METHODS We applied a multivariable time series model with all-cause mortality as outcome and influenza activity and extreme temperatures as explanatory variables, while adjusting for time trend and seasonality. Three indicators of weekly influenza activity (IA) were explored: the percentage of consultations for influenza-like illness (ILI) in primary health care, the national percentage of influenza-positive samples, and the product of the ILI percentage and the percentage of influenza-positive specimens in a given week, that is, the Goldstein index. RESULTS Independent of the choice of parameter to represent influenza activity, the estimated influenza-associated mortality showed similar patterns, with the Goldstein index being the most conservative. Over the 7 winter seasons, the median influenza-associated mortality per 100 000 population was 17.6 (range: 0.0-36.8), 14.1 (0.3-31.6) and 8.3 (0.0-25.0) for the 3 indicators, respectively, for all ages. CONCLUSION The FluMOMO model fitted the Danish data well and has the potential to estimate all-cause influenza-associated mortality in near real time and could be used as a standardised method in other countries. We recommend using the Goldstein index as the influenza activity indicator in the FluMOMO model. Further work is needed to improve the interpretation of the estimated effects.
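The Goldstein index referred to above is simply the ILI consultation percentage multiplied by the percentage of influenza-positive samples in the same week. The sketch below computes it and fits an illustrative Poisson regression of weekly all-cause deaths on influenza activity with trend and seasonal terms; this mirrors the general structure described in the abstract but is not the official FluMOMO implementation, and all data are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
weeks = np.arange(1, 261)                          # five seasons of weekly data
ili_pct = rng.uniform(0, 5, size=weeks.size)       # % ILI consultations
pos_pct = rng.uniform(0, 40, size=weeks.size)      # % influenza-positive samples
goldstein = ili_pct * pos_pct / 100.0              # Goldstein index
deaths = rng.poisson(1000 + 30.0 * goldstein)      # synthetic weekly all-cause deaths

df = pd.DataFrame({"deaths": deaths, "goldstein": goldstein, "trend": weeks,
                   "sin52": np.sin(2 * np.pi * weeks / 52.0),
                   "cos52": np.cos(2 * np.pi * weeks / 52.0)})
X = sm.add_constant(df[["goldstein", "trend", "sin52", "cos52"]])
fit = sm.GLM(df["deaths"], X, family=sm.families.Poisson()).fit()

# Influenza-associated deaths: fitted deaths minus the fitted baseline with IA set to 0.
baseline = fit.predict(X.assign(goldstein=0.0))
print(f"Estimated influenza-associated deaths: {(fit.fittedvalues - baseline).sum():.0f}")
```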
Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification
Despite the steady progress in video analysis led by the adoption of convolutional neural networks (CNNs), the relative improvement has been less drastic than that in 2D static image classification. Three main challenges exist: spatial (image) feature representation, temporal information representation, and model/computation complexity. It was recently shown by Carreira and Zisserman that 3D CNNs, inflated from 2D networks and pretrained on ImageNet, could be a promising way for spatial and temporal representation learning. However, as for model/computation complexity, 3D CNNs are much more expensive than 2D CNNs and prone to overfit. We seek a balance between speed and accuracy by building an effective and efficient video classification system through systematic exploration of critical network design choices. In particular, we show that it is possible to replace many of the 3D convolutions by low-cost 2D convolutions. Rather surprisingly, the best result (in both speed and accuracy) is achieved when replacing the 3D convolutions at the bottom of the network, suggesting that temporal representation learning on high-level “semantic” features is more useful. Our conclusion generalizes to datasets with very different properties. When combined with several other cost-effective designs, including separable spatial/temporal convolution and feature gating, our system results in an effective video classification system that produces very competitive results on several action classification benchmarks (Kinetics, Something-something, UCF101 and HMDB), as well as two action detection (localization) benchmarks (JHMDB and UCF101-24).
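One of the cost-effective designs mentioned above, separable spatial/temporal convolution, factorizes a full 3D convolution into a 2D spatial convolution followed by a 1D temporal convolution. The PyTorch-style block below is a minimal sketch of that idea; the channel counts and the BatchNorm/ReLU placement are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SeparableSTConv(nn.Module):
    """3D convolution factorized into a spatial (1 x k x k) and a temporal (k x 1 x 1) step."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        pad = k // 2
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, k, k),
                                 padding=(0, pad, pad), bias=False)
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(k, 1, 1),
                                  padding=(pad, 0, 0), bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, time, height, width)
        return self.relu(self.bn(self.temporal(self.spatial(x))))

# A full 3x3x3 conv mapping 64 -> 128 channels has 3*3*3*64*128 = 221k weights;
# the separable version has 1*3*3*64*128 + 3*1*1*128*128 = 123k.
x = torch.randn(2, 64, 8, 56, 56)
print(SeparableSTConv(64, 128)(x).shape)  # torch.Size([2, 128, 8, 56, 56])
```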
Safe robot navigation using an omnidirectional camera
The paper presents a safe robot navigation system based on omnidirectional vision. The 360-degree camera images are analyzed for obstacle detection and avoidance, and for navigating safely in the given indoor environment. The module processes images in real time and extracts the direction and distance of obstacles from the camera system mounted on the robot; these two quantities are its outputs. Because of the distortions of omnidirectional vision, camera calibration is necessary, both to correct the images and to obtain accurate direction and distance information. Several image processing methods and techniques were used, which are described in the rest of this paper.
Pruning and Dynamic Scheduling of Cost-Sensitive Ensembles
Previous research has shown that averaging ensemble can scale up learning over very large cost-sensitive datasets with linear speedup independent of the learning algorithms. At the same time, it achieves the same or even better accuracy than a single model computed from the entire dataset. However, one major drawback is its inefficiency in prediction since every base model in the ensemble has to be consulted in order to produce a final prediction. In this paper, we propose several approaches to reduce the number of base classifiers. Among various methods explored, our empirical studies have shown that the benefit-based greedy approach can safely remove more than 90% of the base models while maintaining or even exceeding the prediction accuracy of the original ensemble. Assuming that each base classifier consumes one unit of prediction time, the removal of 90% of base classifiers translates to a prediction speedup of 10 times. On top of pruning, we propose a novel dynamic scheduling approach to further reduce the "expected" number of classifiers employed in prediction. It measures the confidence of a prediction by a subset of classifiers in the pruned ensemble. This confidence is used to decide if more classifiers are needed in order to produce a prediction that is the same as the original unpruned ensemble. This approach reduces the "expected" number of classifiers by another 25% to 75% without loss of accuracy.
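The dynamic-scheduling step described above can be pictured as an early-exit loop over the pruned ensemble: base models are consulted one at a time and prediction stops once the running vote is confident enough. The sketch below is a generic illustration; the simple agreement-fraction confidence rule and its threshold are assumptions, not the paper's benefit-based measure.

```python
def dynamic_predict(classifiers, x, threshold=0.9, min_models=3):
    """Consult base classifiers sequentially; stop once the leading class is confident.

    `classifiers` is a list of fitted models exposing predict([x]) -> [label].
    Returns the predicted label and the number of base models actually consulted.
    """
    votes = {}
    for used, clf in enumerate(classifiers, start=1):
        label = clf.predict([x])[0]
        votes[label] = votes.get(label, 0) + 1
        # Confidence = fraction of consulted models agreeing with the current leader.
        if used >= min_models and max(votes.values()) / used >= threshold:
            break
    return max(votes, key=votes.get), used
```

Because easy examples exit the loop early, only a fraction of the pruned ensemble is consulted on average, which is the source of the additional 25% to 75% reduction reported above.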
Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting
Timely accurate traffic forecast is crucial for urban traffic control and guidance. Due to the high nonlinearity and complexity of traffic flow, traditional methods cannot satisfy the requirements of mid-and-long term prediction tasks and often neglect spatial and temporal dependencies. In this paper, we propose a novel deep learning framework, Spatio-Temporal Graph Convolutional Networks (STGCN), to tackle the time series prediction problem in traffic domain. Instead of applying regular convolutional and recurrent units, we formulate the problem on graphs and build the model with complete convolutional structures, which enable much faster training speed with fewer parameters. Experiments show that our model STGCN effectively captures comprehensive spatio-temporal correlations through modeling multi-scale traffic networks and consistently outperforms state-of-the-art baselines on various real-world traffic datasets.
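The spatial part of such a model is built from graph convolutions over the road network; a single first-order graph-convolution step has the generic form H' = σ(D^(-1/2)(A + I)D^(-1/2) H W). The numpy sketch below shows only that generic building block; STGCN's actual Chebyshev-polynomial and gated temporal-convolution layers are not reproduced here.

```python
import numpy as np

def graph_conv(H, A, W):
    """One first-order graph convolution: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    H: (nodes, in_features) traffic observations, A: (nodes, nodes) road-graph adjacency.
    """
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # inverse square-root degrees
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

# Example: 4 sensor stations, 2 input features, 8 output features.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
H = np.random.randn(4, 2)
W = np.random.randn(2, 8)
print(graph_conv(H, A, W).shape)  # (4, 8)
```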
A Harmonic Suppressed Bandpass Filter and its Application in Diplexer
A multilayer bandpass filter (BPF) with harmonic suppression using meander line inductor and interdigital capacitor (MLI-IDC) resonant structure is presented in this letter. The BPF is fabricated with three unit cells and its measured passband center frequency is 2.56 GHz with a bandwidth of 0.38 GHz and an insertion loss of 1.5 dB. The harmonics are suppressed up to 11 GHz. A diplexer using the proposed BPF is also presented. The proposed diplexer consists of 4.32 mm sized unit cells to couple 2.5 GHz signal into port 2, and 3.65 mm sized unit cells to couple 3.7 GHz signal into port 3. The notch circuit is placed on the output lines of the diplexer to improve isolation. The proposed diplexer has demonstrated insertion loss of 1.35 dB with 0.45 GHz bandwidth in port 2 and 1.73 dB insertion loss with 0.44 GHz bandwidth in port 3. The isolation is better than 18 dB in the first passband with 38 dB maximum isolation at 2.5 GHz. The isolation in the second passband is better than 26 dB with 45 dB maximum isolation at 3.7 GHz.
Improved visualisation of real-time recordings during third generation cryoballoon ablation: a comparison between the novel short-tip and the second generation device
The third-generation Cryoballoon Advance Short-tip (CB-ST) has been designed with a 40 % shortened tip length compared with the former second generation CB advance device (CB-A). Ideally, a shorter tip should permit an improved visualisation of real-time recordings in the pulmonary vein (PV) due to a more proximal positioning of the inner lumen mapping catheter. We sought to compare the incidence of visualisation of real-time recordings in patients having undergone ablation with the CB-ST with patients having received CB-A ablation. All patients having undergone CB ablation using CB-ST technology and the last 500 consecutive patients having undergone CB-A ablation were analysed. Exclusion criteria were the presence of an intracavitary thrombus, uncontrolled heart failure, moderate or severe valvular disease, and contraindications to general anaesthesia. A total of 600 consecutive patients (58.1 ± 12.9 years, 64 % males) were evaluated (100 CB-ST and 500 CB-A ablations). Real-time recordings were significantly more prevalent in the CB-ST population compared with CB-A group (85.7 vs 67.2 %, p < 0.0001). Real-time recordings could be more frequently visualised in the CB-ST group in all types of veins (LSPV 89 vs 73.4 %, p = 0.0005; LIPV 84 vs 65.6 %, p = 0.0002; RSPV 87 vs 67.4 %, p < 0.0001; RIPV 83 vs 62.4 %, p < 0.0001). The rate of visualisation of real-time recordings is significantly higher during third-generation CB-ST ablation if compared to the second-generation CB-A device. Real-time recordings can be visualised in approximately 85.7 % of veins with this novel cryoballoon.
High-resolution mapping of large gas emitting mud volcanoes on the Egyptian continental margin (Nile Deep Sea Fan) by AUV surveys.
Two highly active mud volcanoes located in 990–1,265 m water depths were mapped on the northern Egyptian continental slope during the BIONIL expedition of R/V Meteor in October 2006. High-resolution swath bathymetry and backscatter imagery were acquired with an autonomous underwater vehicle (AUV)-mounted multibeam echosounder, operating at a frequency of 200 kHz. Data allowed for the construction of ~1 m pixel bathymetry and backscatter maps. The newly produced maps provide details of the seabed morphology and texture, and insights into the formation of the two mud volcanoes. They also contain key indicators on the distribution of seepage and its tectonic control. The acquisition of high-resolution seafloor bathymetry and acoustic imagery maps with an AUV-mounted multibeam echosounder fills the gap in spatial scale between conventional multibeam data collected from a surface vessel and in situ video observations made from a manned submersible or a remotely operated vehicle.
To Trust Or Not To Trust A Classifier
Knowing when a classifier’s prediction can be trusted is useful in many applications and critical for safely using AI. While the bulk of the effort in machine learning research has been towards improving classifier performance, understanding when a classifier’s predictions should and should not be trusted has received far less attention. The standard approach is to use the classifier’s discriminant or confidence score; however, we show there exists an alternative that is more effective in many situations. We propose a new score, called the trust score, which measures the agreement between the classifier and a modified nearest-neighbor classifier on the testing example. We show empirically that high (low) trust scores produce surprisingly high precision at identifying correctly (incorrectly) classified examples, consistently outperforming the classifier’s confidence score as well as many other baselines. Further, under some mild distributional assumptions, we show that if the trust score for an example is high (low), the classifier will likely agree (disagree) with the Bayes-optimal classifier. Our guarantees consist of non-asymptotic rates of statistical consistency under various nonparametric settings and build on recent developments in topological data analysis.
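In its simplest nearest-neighbor reading, the trust score compares how far the test point is from the training data of the predicted class versus the closest competing class. The sketch below is a rough illustration under that assumption; the paper's actual estimator filters out low-density points and comes with formal guarantees that this toy version omits.

```python
import numpy as np

def trust_score(X_train, y_train, x, predicted_label):
    """Ratio of the distance to the nearest point of any other class over the
    distance to the nearest point of the predicted class. Higher = more trust."""
    d_pred = np.min(np.linalg.norm(X_train[y_train == predicted_label] - x, axis=1))
    d_other = np.min(np.linalg.norm(X_train[y_train != predicted_label] - x, axis=1))
    return d_other / (d_pred + 1e-12)
```

A score well above 1 says the example sits much closer to its predicted class than to any competing class, which is the regime in which agreement with the Bayes-optimal classifier is most likely.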
Analyzing the Relationship between Regulation and Investment in the Telecom Sector
This study analyses the relationship between entry regulation and infrastructure investment in the telecommunication sector. The empirical analysis we conduct is based on a comprehensive data set covering 180 fixed-line and mobile operators in 25 European countries over 10 years and employs a newly created indicator measuring regulatory intensity in the European countries. We carefully treat the endogeneity problem of regulation by applying instrumental variables and find that tough entry regulation (e.g. unbundling) discourages infrastructure investment by entrants but has no effect on incumbents in fixed-line telecommunications. We do not find significant impact of entry regulation on investment in mobile telephony.
Mining term association patterns from search logs for effective query reformulation
Search engine logs are an emerging new type of data that offers interesting opportunities for data mining. Existing work on mining such data has mostly attempted to discover knowledge at the level of queries (e.g., query clusters). In this paper, we propose to mine search engine logs for patterns at the level of terms through analyzing the relations of terms inside a query. We define two novel term association patterns (i.e., context-sensitive term substitutions and term additions) and propose new methods for mining such patterns from search engine logs. These two patterns can be used to address the mis-specification and under-specification problems of ineffective queries. Experiment results on real search engine logs show that the mined context-sensitive term substitutions can be used to effectively reword queries and improve their accuracy, while the mined context-sensitive term addition patterns can be used to support query refinement in a more effective way.
Patterns and regulation of dissolved organic carbon: An analysis of 7,500 widely distributed lakes
Dissolved organic carbon (DOC) is a key parameter in lakes that can affect numerous features, including microbial metabolism, light climate, acidity, and primary production. In an attempt to understand the factors that regulate DOC in lakes, we assembled a large database (7,514 lakes from 6 continents) of DOC concentrations and other parameters that characterize the conditions in the lakes, the catchment, the soil, and the climate. DOC concentrations were in the range 0.1–332 mg L⁻¹, and the median was 5.71 mg L⁻¹. A partial least squares regression explained 48% of the variability in lake DOC and showed that altitude, mean annual runoff, and precipitation were negatively correlated with lake DOC, while conductivity, soil carbon density, and soil C : N ratio were positively related with lake DOC. A multiple linear regression using altitude, mean annual runoff, and soil carbon density as predictors explained 40% of the variability in lake DOC. While lake area and drainage ratio (catchment : lake area) were not correlated to lake DOC in the global data set, these two factors explained significant variation of the residuals of the multiple linear regression model in several regional subsets of data. These results suggest a hierarchical regulation of DOC in lakes, where climatic and topographic characteristics set the possible range of DOC concentrations of a certain region, and catchment and lake properties then regulate the DOC concentration in each individual lake. Dissolved organic carbon (DOC) is a major modulator of the structure and function of lake ecosystems. The DOC pool of lakes consists of both autochthonous DOC (i.e., produced in the lake) and allochthonous DOC (i.e., produced in the catchment), although allochthonous DOC is generally believed to represent the larger fraction of the total DOC in lakes. Due to the dark color of many DOC compounds, DOC affects the thermal structure and mixing depth of lakes (Fee et al. 1996). DOC is an important regulator of ecosystem production, since its absorptive properties impede photosynthesis (Jones 1998), while at the same time, it serves as a substrate for heterotrophic bacteria (Tranvik 1988). These compounded effects of DOC on primary and secondary production result in many lakes being net heterotrophic ecosystems, with the consequence that they export CO2 to the atmosphere (Sobek et al. 2005). Furthermore, DOC can affect the fate of other solutes, such as metals (Perdue 1998) or organic contaminants (Haitzer et al. 1998). Also, DOC is an efficient protector of lake biota from harmful ultraviolet (UV) radiation (Molot et al. 2004), while hormone-like effects of DOC may affect the physiology of aquatic animals (Steinberg et al. 2004). This wide array of effects on lake ecosystems explains the considerable interest that DOC has received during the last decades. Principally, the DOC concentration in lakes is regulated in a hierarchical manner (Mulholland et al. 1990). Allochthonous DOC is leached from terrestrial soils to lakes, and the amount of DOC released from soils is determined by the water yield and by the production of leachable organic carbon in soils. In the lakes, autochthonous DOC is produced and, together with allochthonous
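The multiple linear regression summarized above, with altitude, mean annual runoff, and soil carbon density explaining about 40% of the variance in lake DOC, has the generic form sketched below. The column names, the tiny made-up table, and the log-transformed response are illustrative assumptions rather than the authors' actual code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table with one row per lake; DOC in mg L^-1.
lakes = pd.DataFrame({
    "doc":          [3.2, 12.5, 6.8, 0.9, 22.1, 5.4],
    "altitude_m":   [420, 55, 180, 1900, 30, 300],
    "runoff_mm":    [650, 300, 480, 1200, 250, 500],
    "soil_c_kg_m2": [9.5, 21.0, 14.2, 4.1, 30.3, 12.0],
})

# DOC is strongly right-skewed, so a log-transformed response is a common choice.
fit = smf.ols("np.log(doc) ~ altitude_m + runoff_mm + soil_c_kg_m2", data=lakes).fit()
print(fit.params)    # per-predictor coefficients of this toy fit
print(fit.rsquared)  # the paper reports R^2 of about 0.40 on 7,514 real lakes
```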
Tutorial on Variational Autoencoders
In just three years, Variational Autoencoders (VAEs) have emerged as one of the most popular approaches to unsupervised learning of complicated distributions. VAEs are appealing because they are built on top of standard function approximators (neural networks), and can be trained with stochastic gradient descent. VAEs have already shown promise in generating many kinds of complicated data, including handwritten digits [1, 2], faces [1, 3, 4], house numbers [5, 6], CIFAR images [6], physical models of scenes [4], segmentation [7], and predicting the future from static images [8]. This tutorial introduces the intuitions behind VAEs, explains the mathematics behind them, and describes some empirical behavior. No prior knowledge of variational Bayesian methods is assumed.
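For reference, the objective a VAE maximizes is the evidence lower bound (ELBO), which pairs a reconstruction term with a KL penalty that keeps the approximate posterior close to the prior:

```latex
\log p_\theta(x) \;\ge\; \mathcal{L}(\theta,\phi;x)
  \;=\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
  \;-\; D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big).
```

Both terms are optimized jointly with stochastic gradient descent by sampling z = \mu_\phi(x) + \sigma_\phi(x) \odot \varepsilon with \varepsilon \sim \mathcal{N}(0, I), the reparameterization trick that makes the expectation differentiable in \phi.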
Is EQ-5D a valid quality of life instrument in patients with Parkinson's disease? A study in Singapore.
INTRODUCTION The purpose of the present study was to evaluate the validity of the EQ-5D in patients with Parkinson's disease (PD) in Singapore. MATERIALS AND METHODS In a cross-sectional survey, patients with PD completed the English or Chinese version of the EQ-5D, the 8-item Parkinson's disease questionnaire (PDQ-8), and questions assessing socio-demographic and health characteristics. Clinical data were retrieved from patients' medical records. The validity of the EQ-5D was assessed by testing a priori hypotheses relating the EQ-5D to the PDQ-8 and clinical data. RESULTS Two hundred and eight PD patients (English speaking: 135) participated in the study. Spearman correlation coefficients between the EQ-5D and PDQ-8 ranged from 0.25 to 0.75 for English-speaking patients and from 0.16 to 0.67 for Chinese-speaking patients. By and large, the EQ-5D scores were weakly or moderately correlated with Hoehn and Yahr stage (correlation coefficients: 0.05 to 0.43), Schwab and England Activities of Daily Living score (correlation coefficients: 0.10 to 0.60), and duration of PD (correlation coefficients: 0.16 to 0.43). The EQ-5D index scores for patients with dyskinesia or "wearing off" periods were significantly lower than those without these problems. The EQ-5D Visual Analog Scale (EQ-VAS) scores also differed for English-speaking patients with differing dyskinesia, "wearing off" periods, or health transition status; however, such differences were not observed in patients who completed the survey in Chinese. CONCLUSIONS The EQ-5D questionnaire appears valid for measuring quality of life in patients with PD in Singapore. However, the validity of EQ-VAS in Chinese-speaking patients with PD should be further assessed.
Inkjet printed System-in-Package design and manufacturing
Additive manufacturing technology using inkjet offers several improvements to electronics manufacturing compared to current nonadditive masking technologies. Manufacturing processes can be made more efficient, straightforward and flexible compared to subtractive masking processes; several time-consuming and expensive steps can be omitted. Due to the additive process, material loss is minimal, because material is never removed as with etching processes. The amounts of used material and waste are smaller, which is advantageous in both productivity and environmental means. Furthermore, the additive inkjet manufacturing process is flexible, allowing fast prototyping, easy design changes and personalization of products. Additive inkjet processing offers new possibilities for electronics integration, by enabling direct writing on various surfaces, and component interconnection without a specific substrate. The design and manufacturing of inkjet printed modules differs notably from the traditional way to manufacture electronics. In this study a multilayer inkjet interconnection process to integrate functional systems was demonstrated, and the issues regarding the design and manufacturing were considered.
A survey and extension of high efficiency grid connected transformerless solar inverters with focus on leakage current characteristics
In the utility grid interconnection of photovoltaic (PV) energy sources, inverters determine the overall system performance, which results in the demand for grid connected transformerless PV inverters (GCTIs) for residential and commercial applications, especially due to their high efficiency, light weight, and low cost benefits. In spite of these benefits of GCTIs, leakage currents due to distributed PV module parasitic capacitances are a major issue in the interconnection, as they are undesired because of safety, reliability, protective coordination, electromagnetic compatibility, and PV module lifetime issues. This paper classifies the kW and above range power rating GCTI topologies based on their leakage current attributes and investigates and illustrates their leakage current characteristics by making use of detailed microscopic waveforms of a representative topology of each class. The cause and quantity of leakage current for each class are identified, not only providing a good understanding, but also aiding the performance comparison and inverter design. With the leakage current characteristic investigation, the study places most topologies under a small number of classes with similar leakage current attributes, facilitating understanding, evaluating, and the design of GCTIs. Establishing a clear relation between the topology type and leakage current characteristic, the topology families are extended with new members, providing design engineers a variety of GCTI topology configurations with different characteristics.
Solution of differential-difference equations by using differential transform method
In this work, we successfully extended differential transform method (DTM), by presenting and proving new theorems, to the solution of differential–difference equations (DDEs). Theorems are presented in the most general form to cover a wide range of DDEs, being linear or nonlinear and constant or variable coefficient. In order to show the power and the robustness of the method and to illustrate the pertinent features of related theorems, examples are presented.
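For context, the one-dimensional differential transform of x(t) about t = t0 and its inverse are

```latex
X(k) \;=\; \frac{1}{k!}\left[\frac{d^{k}x(t)}{dt^{k}}\right]_{t=t_{0}},
\qquad
x(t) \;=\; \sum_{k=0}^{\infty} X(k)\,(t-t_{0})^{k},
```

so that, in the transformed domain, a differential–difference equation becomes a recurrence relation in the coefficients X(k); the theorems added in the paper extend this machinery to the shifted and delayed terms that distinguish DDEs from ordinary differential equations.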
LEARNING ENGLISH IS FUN VIA KAHOOT : STUDENTS ’ ATTITUDE , MOTIVATION AND PERCEPTIONS
This study aims to investigate primary-level students' attitudes, motivation and perceptions in learning English using the Kahoot game. In line with current developments in the Malaysian education field, 21st-century learning styles are given more focus and importance. Thus, Kahoot, a game-based learning platform, has been used to stimulate students from a rural area in southern Malaysia to learn English as their second language more effectively, actively and interestingly. The research design for this study is action research. Nine students were chosen as the target group for this study. The Kahoot game was conducted for three different topics in English after they completed the lesson each day. The data were collected using Attitude and Motivation Test Battery questionnaires consisting of 10 items, an interview session with 5 semi-structured questions for three selected students, and the results of each Kahoot game played by the students. The data were analysed using descriptive analysis. The results of the questionnaires, the interview session and the game results are shown in figures and tables. Findings of the study show that all nine students were able to engage actively in the game and master the target language effectively. They enjoy learning English using games and wish to have more games in future.
Detecting Foggy Images and Estimating the Haze Degree Factor
At present, most outdoor video-surveillance, driver-assistance and optical remote sensing systems have been designed to work under good visibility and weather conditions. Poor visibility often occurs in foggy or hazy weather conditions and can strongly influence the accuracy or even the general functionality of such vision systems. Consequently, it is important to import actual weather-condition data to the appropriate processing mode. Recently, significant progress has been made in haze removal from a single image [1,2]. Based on the hazy weather classification, specialized approaches, such as a dehazing process, can be employed to improve recognition. Figure 1 shows a sample processing flow of our dehazing program.
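The single-image haze-removal work cited above [1,2] is commonly built on the dark channel prior, and a crude haze indicator can be derived from it: haze lifts the minimum intensity found in local patches. The sketch below is a generic illustration of that cue, not the paper's specific haze degree factor; the patch size and any decision threshold are assumptions to be tuned on labelled images.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(rgb, patch=15):
    """Per-pixel minimum over the color channels and a local patch (dark channel prior)."""
    return minimum_filter(rgb.min(axis=2), size=patch)

def haze_score(rgb, patch=15):
    """Crude haze indicator for an RGB image scaled to [0, 1]: mean dark-channel value.

    Clear scenes have near-zero dark channels; fog or haze lifts them, so larger
    scores suggest a foggier image.
    """
    return float(dark_channel(rgb, patch).mean())

# Usage: flag an image as foggy when haze_score(image) exceeds a threshold tuned
# on a small set of labelled clear/foggy examples.
```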
Supporting workflow in a course management system
CMS is a secure and scalable web-based course management system developed by the Cornell University Computer Science Department. The system was designed to simplify, streamline, and automate many aspects of the workflow associated with running a large course, such as course creation, importing students, management of student workgroups, online submission of assignments, assignment of graders, grading, handling regrade requests, and preparation of final grades. In contrast, other course management systems of which we are aware provide only specialized solutions for specific components, such as grading. CMS is increasingly widely used for course management at Cornell University. In this paper we articulate the principles we followed in designing the system and describe the features that users found most useful.
Changes in levels of catalase and glutathione in erythrocytes of patients with stable asthma, treated with beclomethasone dipropionate.
In asthmatic patients, antioxidant defence is decreased. Although inhaled corticosteroids decrease asthmatic inflammation and modulate reactive oxygen species (ROS) generation, little is known of their effect on cellular antioxidant levels. The aim of this study was to evaluate the effect of inhaled beclomethasone dipropionate (BDP; 1,000 microg x day(-1)) on erythrocyte antioxidant levels in stable asthmatic patients. Forty patients with stable, mild asthma were treated in a double-blind, placebo-controlled, parallel-group study with BDP 250 microg, two puffs b.i.d. for 6 weeks. At entry and every 2 weeks during treatment, erythrocyte antioxidant levels, haematological parameters, pulmonary function tests and asthma symptoms were determined. The results show that during treatment with BDP, erythrocyte catalase levels increased (at entry (mean +/-SEM) 41+/-4, after 6 weeks 54+/-4 micromol H2O2 x min(-1) x g haemoglobin (Hb)(-1), p = 0.05 in comparison with placebo). Erythrocyte total glutathione levels significantly decreased after 6 weeks treatment with BDP (from 7.0+/-0.4 to 6.6+/-0.3 micromol x g Hb(-1) (p = 0.04)). In the BDP-treated patients, blood eosinophil counts were higher in patients who responded with an increase in erythrocyte catalase levels during BDP treatment, as compared to those not responding ((mean +/-SEM) 340+/-39 and 153+/-52x10(6) cells x L(-1), respectively, p = 0.05). The present study shows that treatment with inhaled beclomethasone dipropionate results in changes in antioxidant levels in erythrocytes of patients with stable, mild asthma.
Reconciling mobile app privacy and usability on smartphones: could user privacy profiles help?
As they compete for developers, mobile app ecosystems have been exposing a growing number of APIs through their software development kits. Many of these APIs involve accessing sensitive functionality and/or user data and require approval by users. Android for instance allows developers to select from over 130 possible permissions. Expecting users to review and possibly adjust settings related to these permissions has proven unrealistic. In this paper, we report on the results of a study analyzing people's privacy preferences when it comes to granting permissions to different mobile apps. Our results suggest that, while people's mobile app privacy preferences are diverse, a relatively small number of profiles can be identified that offer the promise of significantly simplifying the decisions mobile users have to make. Specifically, our results are based on the analysis of settings of 4.8 million smartphone users of a mobile security and privacy platform. The platform relies on a rooted version of Android where users are allowed to choose between "granting", "denying" or "requesting to be dynamically prompted" when it comes to granting 12 different Android permissions to mobile apps they have downloaded.
Adaptive inventory control models for supply chain management
Uncertainties inherent in customer demands make it difficult for supply chains to achieve just-in-time inventory replenishment, resulting in losing sales opportunities or keeping excessive chain-wide inventories. In this paper, we propose two adaptive inventory-control models for a supply chain consisting of one supplier and multiple retailers. One is a centralized model and the other is a decentralized model. The objective of the two models is to satisfy a target service level predefined for each retailer. The inventory-control parameters of the supplier and retailers are safety lead time and safety stocks, respectively. Unlike most extant inventory-control approaches, modelling the uncertainty of customer demand as a statistical distribution is not a prerequisite in the two models. Instead, using a reinforcement learning technique called the action-value method, the control parameters are designed to adaptively change as customer-demand patterns change. A simulation-based experiment was performed to compare the performance of the two inventory-control models.
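The action-value idea mentioned above can be illustrated with a small bandit-style loop that nudges a retailer's safety stock up or down and keeps a running value estimate for each adjustment; everything in the sketch (the action set, reward, and demand model) is a made-up toy, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
actions = np.array([-1.0, 0.0, 1.0])   # decrease, keep, or increase safety stock
q = np.zeros(len(actions))             # action-value estimates
n = np.zeros(len(actions))             # times each action has been tried
safety_stock, base_order, epsilon = 10.0, 20, 0.1

for _ in range(500):                   # one simulated period per iteration
    a = rng.integers(len(actions)) if rng.random() < epsilon else int(np.argmax(q))
    safety_stock = max(0.0, safety_stock + actions[a])
    demand = rng.poisson(20)
    stockout = demand > base_order + safety_stock
    reward = -(5.0 * stockout + 0.1 * safety_stock)   # stockout penalty + holding cost
    n[a] += 1
    q[a] += (reward - q[a]) / n[a]                    # sample-average update

print(f"Learned safety stock: {safety_stock:.1f}")
```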
Reflectometry with a scanning laser ophthalmoscope.
We describe noninvasive techniques to optimize reflectometry measurements, particularly retinal densitometry, which measures the photopigment density difference. With these techniques unwanted scattered light is greatly reduced, and the retina is visualized during measurements. Thus results may be compared for each retinal location, and visible artifacts are minimized. The density difference measurements of the cone photopigment depend on the optical configuration of the apparatus. The cone photopigment density difference is greatest near the fovea and for most observers decreases rapidly with eccentricity. A research version for reflectometry and psychophysics of the scanning laser ophthalmoscope is described.
Post-sedimentary tectonic evolution,cap rock condition and hydrocarbon information of Carboniferous-Permian in Ejin Banner and its vicinities,western Inner Mongolia:a study of Carboniferous-Permian petroleum geological conditions(part 3)
Based on an analysis of Mesozoic-Cenozoic sedimentary evolution and a study of structural reformation after the Carboniferous-Permian and Mesozoic magmatism, the authors hold that the Cretaceous was a flourishing period of Yingen-Ejin Banner basin evolution: there existed a combination of lacustrine-facies and swamp-facies sandstone and mudstone characterized by large thickness and stable horizontal spread, and the Cretaceous mudstone constituted a good regional cap of the Carboniferous-Permian petroleum system. Although multistage tectonic reworking took place after the Carboniferous-Permian, tectonic stress was dominated by extrusion and uplift. In general, the extrusion and uplift did not affect the Carboniferous-Permian petroleum system except in local areas where structural dynamometamorphism occurred. Four stages of magmatism occurred, of which the Early Cretaceous magmatism was strong. Affected by this magmatism, Carboniferous-Permian strata locally experienced thermal metamorphism and hydrocarbon source rocks reached the mature stage, but the affected range was limited. There were four sedimentary cycles during the Carboniferous-Permian; as a result, there existed mud shale of large thickness and stable horizontal spread, especially in the Amushan and Maihanhada Formations, and the mud shale was not only a good source rock but also a good cap rock. The Carboniferous-Permian underwent long-term uplift and erosion from the Late Variscan to the Indo-Chinese epoch, but geophysical data interpretation indicates a stable spread of Carboniferous-Permian strata in the Ejin Banner-Wutaohai area, with the residual thickness generally ranging from 1000 to 2000 m and locally exceeding 3000 m. Information from a series of Carboniferous-Permian hydrocarbon source rocks and related hydrocarbon shows indicates that processes of hydrocarbon generation, migration and accumulation occurred during the Carboniferous-Permian period, which suggests good hydrocarbon prospects.
Cryolipolysis for Targeted Fat Reduction and Improved Appearance of the Enlarged Male Breast.
BACKGROUND Pseudogynecomastia refers to benign male breast enlargement due to excess subareolar fat. Standard treatment is surgical excision under general anesthesia, liposuction, or a combination of both. OBJECTIVE The safety and efficacy of cryolipolysis was investigated for nonsurgical treatment of pseudogynecomastia. METHODS AND MATERIALS Enrollment consisted of 21 males with pseudogynecomastia. Subjects received a first treatment consisting of a 60-minute cryolipolysis cycle, followed by a two-minute massage, and a second 60-minute cycle with 50% treatment area overlap. At 60 days of follow-up, subjects received a second 60-minute treatment. Safety was evaluated by monitoring side effects and adverse events. Efficacy was assessed by ultrasound, clinical photographs, and subject surveys. RESULTS Surveys revealed that 95% of subjects reported improved visual appearance and 89% reported reduced embarrassment associated with pseudogynecomastia. Ultrasound showed mean fat layer reduction of 1.6 ± 1.2 mm. Blinded reviewers correctly identified 82% of baseline photographs. Side effects included mild discomfort during treatment and transient paresthesia and tenderness. One case of paradoxical hyperplasia (PH) occurred but likelihood of PH in the male breast is not believed to be greater than in any other treatment area. CONCLUSION This study demonstrated feasibility of cryolipolysis for safe, effective, and well-tolerated nonsurgical treatment of pseudogynecomastia.
An Intelligent Personal Assistant for Task and Time Management
We describe an intelligent personal assistant that has been developed to aid a busy knowledge worker in managing time commitments and performing tasks. The design of the system was motivated by the complementary objectives of (a) relieving the user of routine tasks, thus allowing her to focus on tasks that critically require human problem-solving skills, and (b) intervening in situations where cognitive overload leads to oversights or mistakes by the user. The system draws on a diverse set of AI technologies that are linked within a Belief-Desire-Intention agent system. Although the system provides a number of automated functions, the overall framework is highly user-centric in its support for human needs, responsiveness to human inputs, and adaptivity to user working style and preferences.
Endovascular treatment of ruptured thoracic aortic aneurysm in patients older than 75 years.
OBJECTIVES To investigate the outcomes of thoracic endovascular aortic repair (TEVAR) for ruptured descending thoracic aortic aneurysm (rDTAA) in patients older than 75 years. METHODS We retrospectively identified all patients treated with TEVAR for rDTAA at seven referral centres between 2002 and 2009. The cohort was stratified according to age ≤75 and >75 years, and the outcomes after TEVAR were compared between both groups. RESULTS Ninety-two patients were identified of which 73% (n = 67) were ≤75 years, and 27% (n = 25) were older than 75 years. The 30-day mortality was 32.0% in patients older than 75 years, and 13.4% in the remaining patients (p = 0.041). Patients older than 75 years suffered more frequently from postoperative stroke (24.0% vs. 1.5%, p = 0.001) and pulmonary complications (40.0% vs. 9.0%, p = 0.001). The aneurysm-related survival after 2 years was 52.1% for patients >75 years, and 83.9% for patients ≤75 years (p = 0.006). CONCLUSIONS Endovascular treatment of rDTAA in patients older than 75 years is associated with an inferior outcome compared with patients younger than 75 years. However, the mortality and morbidity rates in patients above 75 years are still acceptable. These results may indicate that endovascular treatment for patients older than 75 years with rDTAA is worthwhile.
Eye development.
The vertebrate eye comprises tissues from different embryonic origins: the lens and the cornea are derived from the surface ectoderm, but the retina and the epithelial layers of the iris and ciliary body are from the anterior neural plate. The timely action of transcription factors and inductive signals ensure the correct development of the different eye components. Establishing the genetic basis of eye defects in zebrafishes, mouse, and human has been an important tool for the detailed analysis of this complex process. A single eye field forms centrally within the anterior neural plate during gastrulation; it is characterized on the molecular level by the expression of "eye-field transcription factors." The single eye field is separated into two, forming the optic vesicle and later (under influence of the lens placode) the optic cup. The lens develops from the lens placode (surface ectoderm) under influence of the underlying optic vesicle. Pax6 acts in this phase as master control gene, and genes encoding cytoskeletal proteins, structural proteins, or membrane proteins become activated. The cornea forms from the surface ectoderm, and cells from the periocular mesenchyme migrate into the cornea giving rise for the future cornea stroma. Similarly, the iris and ciliary body form from the optic cup. The outer layer of the optic cup becomes the retinal pigmented epithelium, and the main part of the inner layer of the optic cup forms later the neural retina with six different types of cells including the photoreceptors. The retinal ganglion cells grow toward the optic stalk forming the optic nerve. This review describes the major molecular players and cellular processes during eye development as they are known from frogs, zebrafish, chick, and mice-showing also differences among species and missing links for future research. The relevance to human disorders is one of the major aspects covered throughout the review.
Color image decomposition and restoration
Meyer has recently introduced an image decomposition model to split an image into two components: a geometrical component and a texture (oscillatory) component. Inspired by his work, numerical models have been developed to carry out the decomposition of gray scale images. In this paper, we propose a decomposition algorithm for color images. We introduce a generalization of Meyer's G norm to RGB vectorial color images, and use the Chromaticity and Brightness color model with total variation minimization. We illustrate our approach with numerical examples.
Mesh Location in Open Ventral Hernia Repair: A Systematic Review and Network Meta-analysis
There is no consensus on the ideal location for mesh placement in open ventral hernia repair (OVHR). We aim to identify the mesh location associated with the lowest rate of recurrence following OVHR using a systematic review and meta-analysis. A search was performed for studies comparing at least two of four locations for mesh placement during OVHR (onlay, inlay, sublay, and underlay). Outcomes assessed were hernia recurrence and surgical site infection (SSI). Pairwise meta-analysis was performed to compare all direct treatment of mesh locations. A multiple treatment meta-analysis was performed to compare all mesh locations in the Bayesian framework. Sensitivity analyses were planned for the following: studies with a low risk of bias, incisional hernias, by hernia size, and by mesh type (synthetic or biologic). Twenty-one studies were identified (n = 5,891). Sublay placement of mesh was associated with the lowest risk for recurrence [OR 0.218 (95 % CI 0.06–0.47)] and was the best of the four treatment modalities assessed [Prob (best) = 94.2 %]. Sublay was also associated with the lowest risk for SSI [OR 0.449 (95 % CI 0.12–1.16)] and was the best of the 4 treatment modalities assessed [Prob (best) = 77.3 %]. When only assessing studies at low risk of bias, of incisional hernias, and using synthetic mesh, the probability that sublay had the lowest rate of recurrence and SSI was high. Sublay mesh location has lower complication rates than other mesh locations. While additional randomized controlled trials are needed to validate these findings, this network meta-analysis suggests the probability of sublay being the best location for mesh placement is high.
A global analysis of protected area management effectiveness.
We compiled details of over 8000 assessments of protected area management effectiveness across the world and developed a method for analyzing results across diverse assessment methodologies and indicators. Data was compiled and analyzed for over 4000 of these sites. Management of these protected areas varied from weak to effective, with about 40% showing major deficiencies. About 14% of the surveyed areas showed significant deficiencies across many management effectiveness indicators and hence lacked basic requirements to operate effectively. Strongest management factors recorded on average related to establishment of protected areas (legal establishment, design, legislation and boundary marking) and to effectiveness of governance; while the weakest aspects of management included community benefit programs, resourcing (funding reliability and adequacy, staff numbers and facility and equipment maintenance) and management effectiveness evaluation. Estimations of management outcomes, including both environmental values conservation and impact on communities, were positive. We conclude that in spite of inadequate funding and management process, there are indications that protected areas are contributing to biodiversity conservation and community well-being.
An approach to Korean license plate recognition based on vertical edge matching
License plate recognition (LPR) has many applications in traffic monitoring systems. In this paper, a vertical edge matching based algorithm to recognize Korean license plates from input gray-scale images is proposed. The algorithm is able to recognize license plates in normal shape, as well as plates that are out of shape due to the angle of view. The proposed algorithm is fast enough that the recognition unit of an LPR system can be implemented in software alone, reducing the cost of the system.
Plug-and-Play Unplugged: Optimization Free Reconstruction using Consensus Equilibrium
Regularized inversion methods for image reconstruction are used widely due to their tractability and their ability to combine complex physical sensor models with useful regularity criteria. Such methods were used in the recently developed Plug-and-Play prior method, which provides a framework to use advanced denoising algorithms as regularizers in inversion. However, the need to formulate regularized inversion as the solution to an optimization problem severely limits both the expressiveness of possible regularity conditions and the variety of provably convergent Plug-and-Play denoising operators. In this paper, we introduce the concept of consensus equilibrium (CE), which generalizes regularized inversion to include a much wider variety of regularity operators without the need for an optimization formulation. Consensus equilibrium is based on the solution of a set of equilibrium equations that balance data fit and regularity. In this framework, the problem of MAP estimation in regularized inversion is replaced by the problem of solving these equilibrium equations, which can be approached in multiple ways, including as a fixed point problem that generalizes the ADMM approach used in the Plug-and-Play method. We present the Douglas-Rachford (DR) algorithm for computing the CE solution as a fixed point and prove the convergence of this algorithm under conditions that include denoising operators that do not arise from optimization problems and that may not be nonexpansive. We give several examples to illustrate the idea of consensus equilibrium and the convergence properties of the DR algorithm and demonstrate this method on a sparse interpolation problem using electron microscopy data.
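For orientation, the Plug-and-Play ADMM iteration that consensus equilibrium generalizes alternates a data-fidelity proximal step with an off-the-shelf denoiser; the sketch below shows that baseline loop with placeholder operators (the proximal map and the denoiser are user-supplied, and no claim is made about the paper's Douglas-Rachford variant).

```python
import numpy as np

def plug_and_play_admm(y, prox_data, denoiser, shape, iters=50):
    """Plug-and-Play ADMM: alternate a data-fit proximal step with a denoising step.

    prox_data(v, y): approximates argmin_x 0.5*||x - v||^2 + data_fit(x; y)  (placeholder)
    denoiser(v):     any denoising operator playing the role of the regularizer (placeholder)
    """
    x = np.zeros(shape)
    v = np.zeros(shape)
    u = np.zeros(shape)
    for _ in range(iters):
        x = prox_data(v - u, y)   # pull the estimate toward the measurements
        v = denoiser(x + u)       # pull the estimate toward the denoiser's output
        u = u + x - v             # dual / consensus update
    return x
```

Consensus equilibrium drops the requirement that these two maps come from an optimization problem and instead characterizes the reconstruction as the balance point of such agent updates, computed as a fixed point.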
Learning Algorithms for Keyphrase Extraction
Many academic journals ask their authors to provide a list of about five to fifteen keywords, to appear on the first page of each article. Since these key words are often phrases of two or more words, we prefer to call them keyphrases. There is a wide variety of tasks for which keyphrases are useful, as we discuss in this paper. We approach the problem of automatically extracting keyphrases from text as a supervised learning task. We treat a document as a set of phrases, which the learning algorithm must learn to classify as positive or negative examples of keyphrases. Our first set of experiments applies the C4.5 decision tree induction algorithm to this learning task. We evaluate the performance of nine different configurations of C4.5. The second set of experiments applies the GenEx algorithm to the task. We developed the GenEx algorithm specifically for automatically extracting keyphrases from text. The experimental results support the claim that a custom-designed algorithm (GenEx), incorporating specialized procedural domain knowledge, can generate better keyphrases than a general-purpose algorithm (C4.5). Subjective human evaluation of the keyphrases generated by GenEx suggests that about 80% of the keyphrases are acceptable to human readers. This level of performance should be satisfactory for a wide variety of applications.
Detection of microaneurysms using multi-scale correlation coefficients
This paper presents a new approach to the computer aided diagnosis (CAD) of diabetic retinopathy (DR)—a common and severe complication of long-term diabetes which damages the retina and causes blindness. Since microaneurysms are regarded as the first signs of DR, there has been extensive research on effective detection and localization of these abnormalities in retinal images. In contrast to existing algorithms, a new approach based on multi-scale correlation filtering (MSCF) and dynamic thresholding is developed. This consists of two levels, microaneurysm candidate detection (coarse level) and true microaneurysm classification (fine level). The approach was evaluated based on two public datasets—ROC (retinopathy on-line challenge, http://roc.healthcare.uiowa.edu) and DIARETDB1 (standard diabetic retinopathy database, http://www.it.lut.fi/project/imageret/diaretdb1). We conclude our method to be effective and efficient.
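A coarse-level candidate detector in the spirit of multi-scale correlation filtering can be sketched as correlating the image with small Gaussian templates of several widths and keeping the maximum normalized response per pixel. The code below is a generic illustration under that reading; the scale set, the use of the inverted green channel, and the fixed cut-off stand in for the paper's actual filter bank and dynamic thresholding.

```python
import numpy as np
from skimage.feature import match_template

def gaussian_template(sigma, size=None):
    """Small 2-D Gaussian blob used as the matched template for one scale."""
    size = size or int(6 * sigma) | 1          # odd window roughly 6 sigma wide
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

def msc_response(green_inverted, sigmas=(1.0, 1.5, 2.0, 2.5)):
    """Maximum normalized cross-correlation with Gaussian blobs over several scales."""
    responses = [match_template(green_inverted, gaussian_template(s), pad_input=True)
                 for s in sigmas]
    return np.max(np.stack(responses), axis=0)

# Candidate microaneurysms: pixels whose multi-scale response exceeds a threshold
# (the paper uses dynamic thresholding; a fixed cut-off would be used here for brevity):
# candidates = msc_response(1.0 - green_channel) > 0.5
```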
Nasal reconstruction: an overview and nuances.
Nasal reconstruction continues to be a formidable challenge for most plastic surgeons. This article provides an overview of nasal reconstruction with brief descriptions of subtle nuances involving certain techniques that the authors believe help their overall outcomes. The major aspects of nasal reconstruction are included: lining, support, skin coverage, local nasal flaps, nasolabial flap, and paramedian forehead flap. The controversy of the subunit reconstruction versus defect-only reconstruction is briefly discussed. The authors believe that strictly adhering to one principle or another limits one's options, and the patient will benefit more if one is able to apply a variety of options for each individualized defect. A different approach to full-thickness skin grafting is also briefly discussed as the authors propose its utility in lower third reconstruction. In general, the surgeon should approach each patient as a distinct individual with a unique defect and thus tailor each reconstruction to fit the patient's needs and expectations. Postoperative care, including dermabrasion, skin care, and counseling, cannot be understated.
Garbage Collection in an Uncooperative Environment
A later version of this paper appeared in Software Practice and Experience 18, 9, pp. 807-820.
Design , Analysis and Performance of a Rotary Wing MAV
An initial design concept for a micro-coaxial rotorcraft using custom manufacturing techniques and commercial off-the-shelf components is discussed in this paper. Issues associated with the feasibility of achieving hover and fully functional flight control at small scale for a coaxial rotor configuration are addressed. Results from this initial feasibility study suggest that it is possible to develop a small-scale coaxial micro rotorcraft weighing approximately 100 grams, and that available moments are appropriate for roll, yaw and lateral control. A prototype vehicle was built and its rotors were tested in a custom hover stand used to measure thrust and power. The radio-controlled vehicle was flown untethered with its own power source and exhibited good flight stability and control dynamics. The best achievable rotor performance was measured to be 42%.
Aligning Gaussian-Topic with Embedding Network for Summarization Ranking
Query-oriented summarization addresses the problem of information overload and helps people get the main ideas within a short time. Summaries are composed of sentences, so the basic idea of composing a salient summary is to construct quality sentences for both user-specific queries and multiple documents. Sentence embedding has been shown to be effective in summarization tasks. However, these methods lack the latent topic structure of the contents; hence, a summary that lies only in vector space can hardly capture multi-topical content. In this paper, our proposed model incorporates topical aspects and continuous vector representations, jointly learning semantically rich representations encoded by vectors. Then, leveraging a topic filtering and embedding ranking model, the summarization can select desirable salient sentences. Experiments demonstrate the outstanding performance of our proposed model from the perspectives of prominent topics and semantic coherence.
Collisions of the Second Kind between Ions and Atoms or Molecules
SOME experiments dealing with collisions of the second kind between ions and atoms or molecules have been performed independently and simultaneously in two different laboratories—one at Princeton University and one at the University of California. Through correspondence the different investigators have learned of each other's results and have decided to present jointly, in this letter, a preliminary report of this phenomenon, which heretofore has been unknown. In both researches, the apparatus used are essentially those described in the positive ray experiments by Smyth (Proc. Roy. Soc. and Phys. Rev.) and by Hogness and Lunn (Phys. Rev.). One set of experiments is the preliminary stage of a complete study of the whole phenomenon; the other is incidental to work on the positive ray analysis of nitric oxide.
A Study of Android Malware Detection Techniques and Machine Learning
Android OS is one of the most widely used mobile operating systems. The number of malicious applications and adware is increasing constantly, on par with the number of mobile devices. A great number of commercial signature-based tools are available on the market, which prevent, to an extent, the penetration and distribution of malicious applications. Numerous studies claim that traditional signature-based detection works well only up to a certain level, and that malware authors use numerous techniques to evade these tools. Given this state of affairs, there is an increasing need for an alternative, more robust malware detection system to complement and rectify the signature-based approach. Substantial recent research has focused on machine learning algorithms that analyze features extracted from malicious applications and use those features to classify and detect unknown malicious applications. This study summarizes the evolution of malware detection techniques based on such machine learning algorithms.
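A minimal sketch of the feature-based classification approach this survey covers, under the assumption of binary permission/API-usage features and labeled training apps; the data here is synthetic, and real detectors extract far richer static and dynamic features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Each row: binary indicators for requested permissions / API calls (toy data).
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(200, 20))
# Synthetic "malicious" label derived from a few feature combinations.
y = (X[:, 0] & X[:, 3] | X[:, 7]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```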
Theory and Algorithms for Forecasting Time Series
We present data-dependent learning bounds for the general scenario of non-stationary non-mixing stochastic processes. Our learning guarantees are expressed in terms of a data-dependent measure of sequential complexity and a discrepancy measure that can be estimated from data under some mild assumptions. We also provide a novel analysis of a stable time-series forecasting algorithm using this new notion of discrepancy. We use our learning bounds to devise new algorithms for non-stationary time series forecasting, for which we report some preliminary experimental results. An extended abstract has appeared in (Kuznetsov and Mohri, 2015).
Image-to-Markup Generation with Coarse-to-Fine Attention
We present a neural encoder-decoder model to convert images into presentational markup based on a scalable coarse-to-fine attention mechanism. Our method is evaluated in the context of image-to-LaTeX generation, and we introduce a new dataset of real-world rendered mathematical expressions paired with LaTeX markup. We show that unlike neural OCR techniques using CTC-based models, attention-based approaches can tackle this non-standard OCR task. Our approach outperforms classical mathematical OCR systems by a large margin on in-domain rendered data, and, with pretraining, also performs well on out-of-domain handwritten data. To reduce the inference complexity associated with the attention-based approaches, we introduce a new coarse-to-fine attention layer that selects a support region before applying attention.
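The coarse-to-fine idea can be sketched roughly as follows (a simplified illustration, not the paper's exact layer): coarse attention over pooled image regions picks a support region, and fine soft attention is then computed only inside that region instead of over the full feature grid.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def coarse_to_fine_attention(features, query, grid=4):
    """features: (H, W, D) encoder grid; query: (D,) decoder state.
    Returns a context vector computed with attention restricted to one
    coarse cell rather than over all H*W positions."""
    H, W, D = features.shape
    h_step, w_step = H // grid, W // grid

    # Coarse stage: score each pooled cell and pick the best support region.
    cells = features.reshape(grid, h_step, grid, w_step, D).mean(axis=(1, 3))
    coarse_scores = softmax(cells.reshape(-1, D) @ query)
    r, c = divmod(int(coarse_scores.argmax()), grid)

    # Fine stage: ordinary soft attention inside the selected cell only.
    region = features[r*h_step:(r+1)*h_step, c*w_step:(c+1)*w_step].reshape(-1, D)
    fine_weights = softmax(region @ query)
    return fine_weights @ region   # context vector, shape (D,)

ctx = coarse_to_fine_attention(np.random.rand(16, 16, 32), np.random.rand(32))
print(ctx.shape)
```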
From Wearable Sensors to Smart Implants – Toward Pervasive and Personalized Healthcare
Objective: This paper discusses the evolution of pervasive healthcare from its inception for activity recognition using wearable sensors to the future of sensing implant deployment and data processing. Methods: We provide an overview of some of the past milestones and recent developments, categorized into different generations of pervasive sensing applications for health monitoring. This is followed by a review of recent technological advances that have allowed unobtrusive continuous sensing, combined with diverse technologies, to reshape the clinical workflow for both acute and chronic disease management. We discuss the opportunities of pervasive health monitoring through data linkages with other health informatics systems, including the mining of health records, clinical trial databases, multiomics data integration, and social media. Conclusion: Technical advances have supported the evolution of the pervasive health paradigm toward preventative, predictive, personalized, and participatory medicine. Significance: The sensing technologies discussed in this paper and their future evolution will play a key role in realizing the goal of sustainable healthcare systems.
Person Organization Fit, Organizational Commitment and Knowledge Sharing Attitude – An Analytical Study
The purpose of this study is to analyze the relationship among Person Organization Fit (POF), Organizational Commitment (OC) and Knowledge Sharing Attitude (KSA). The paper develops a conceptual frame based on theory and a literature review. A quantitative approach has been used to measure the levels of POF and OC and to explore the relationships of these variables with KSA and with each other, using a sample of 315 academic managers of public sector institutions of higher education. POF has a positive relationship with OC and KSA, and a positive relationship also exists between OC and KSA. This is an effective contribution to the existing body of knowledge. Managers and other stakeholders may use these findings to recognize the significance of POF, OC and KSA, and of their relationships with each other, for selecting employees best fitted to the organization and for creating and maintaining an environment conducive to improving the organizational commitment and knowledge sharing of employees, which will ultimately result in enhanced efficacy and effectiveness of the organization.
Can Parenting Styles Affect the Children's Development of Narcissism? A Systematic Review
The aim of this paper is to determine whether different types of parenting styles (and which ones) affect a child's development in the direction of narcissism, through a systematic review of the studies on the subject in the literature, considering only research published from the 1990s to today. The ten studies considered in this review are representative of the main approaches used to investigate the association between parenting and the emergence of narcissistic features in children. These studies have used different research methods, operationalizing the concept of parenting in diversified ways and showing sensitivity to the multidimensionality of the construct of narcissism. The results of these studies allow us to say that positive parenting styles are in general more associated with the development of healthy narcissistic tendencies, compatible with the child's normal physical, mental and adaptive development.
Πάντα ῥεῖ καὶ οὐδὲν μένει (Everything flows and nothing stays)
This paper examines the content of the «technical» realization of three special methods of criminalistic cognition: criminalistic identification, criminalistic diagnostics and criminalistic classification. Criminalistic technics (as a system of knowledge) is a branch of the special part of criminalistic theory that describes and explains the regularities of the emergence of materially fixed traces during the investigation of criminal offences. Concrete technical means, knowledge and skills for finding and examining such traces have already been worked out and recommended.
Neurally-Guided Procedural Models: Learning to Guide Procedural Models with Deep Neural Networks
We present a deep learning approach for speeding up constrained procedural modeling. Probabilistic inference algorithms such as Sequential Monte Carlo (SMC) provide powerful tools for constraining procedural models, but they require many samples to produce desirable results. In this paper, we show how to create procedural models which learn how to satisfy constraints. We augment procedural models with neural networks: these networks control how the model makes random choices based on what output it has generated thus far. We call such a model a neurally-guided procedural model. As a pre-computation, we train these models on constraint-satisfying example outputs generated via SMC. They are then used as efficient importance samplers for SMC, generating high-quality results with very few samples. We evaluate our method on L-system-like models with image-based constraints. Given a desired quality threshold, neurally-guided models can generate satisfactory results up to 10x faster than unguided models.
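The core mechanic can be illustrated with a toy importance-sampled random choice (a hedged sketch, not the paper's models): the unguided prior makes a fixed-probability branching decision, while a guide function, standing in for the learned neural network, proposes the decision conditioned on the partial output (here just the depth), and the prior/proposal ratio is kept as an SMC importance weight.

```python
import random

def prior_prob_branch(depth):
    # Unguided procedural model: branch with a fixed probability of 0.5.
    return 0.5

def guide_prob_branch(depth):
    # Stand-in for the learned guide: branch less as the structure deepens.
    return max(0.05, 0.5 - 0.1 * depth)

def guided_choice(depth):
    """Sample a branching decision from the guide and return it together
    with its importance weight p_prior(x) / q_guide(x)."""
    q = guide_prob_branch(depth)
    branch = random.random() < q
    p = prior_prob_branch(depth)
    weight = (p / q) if branch else ((1 - p) / (1 - q))
    return branch, weight

random.seed(0)
print([guided_choice(d) for d in range(4)])
```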
'Unterm reinsten Himmel der unsicherste Boden': Die Grossstadterfahrung in Goethes Italiendichtung ['Under the purest sky, the most uncertain ground': The Experience of the Big City in Goethe's Italian Writings]
This study deals with the urban experiences in the "Italian" works of Goethe: The Roman Elegies (1795), The Venetian Epigrams (1790/1800), and The Italian Journey (1816/1817-1829). The introductory part delivers an overview of the evolution of the idea of the city, from the cities of ancient Greece and the biblical tradition to the modern metropolises that have become synonymous with mobility and nervousness. The second part is dedicated to The Roman Elegies and The Venetian Epigrams, two collections of poems in which urban space figures prominently. It is then The Italian Journey that delivers the bulk of the material and thus allows the most insight into Goethe's urban experiences. The study focuses on the four large cities which Goethe saw in Italy and describes in his works: Venice, Palermo, Naples and Rome. Goethe shows the possibilities and the dangers of the metropolis. These possibilities are not presented in an abstract manner but take a rather concrete shape: it is in the large cities where Goethe convalesces and where his personality evolves. This does not prevent him from criticising the life and the events in these cities, be it in the political, spiritual or aesthetic field. Whenever he sees at stake what he considers to be the decisive points of urban life (historical consciousness, the tradition of principles, exchange, and concentration), he criticises. Dissipation, velocity, and pandemonium threaten these urban experiences more than ever. Goethe calls attention to the things that are fading and need to be preserved, and he gives the reasons why some urban attainments are worth protecting.
iDocChip: A Configurable Hardware Architecture for Historical Document Image Processing: Percentile Based Binarization
End-to-end Optical Character Recognition (OCR) systems are heavily used to convert document images into machine-readable text. Commercial and open-source OCR systems (like Abbyy, OCRopus, Tesseract, etc.) have traditionally been optimized for contemporary documents like books, letters, memos, and other end-user documents. However, these systems are difficult to use equally well for digitizing historical document images, which contain degradations like non-uniform shading, bleed-through, and irregular layout; such degradations usually do not exist in contemporary document images. The open-source anyOCR system is an end-to-end OCR pipeline that contains the state-of-the-art techniques required for digitizing degraded historical archives with high accuracy. However, high accuracy comes at the cost of high computational complexity, which results in 1) long runtimes that limit the digitization of large collections of historical archives and 2) high energy consumption, which is the most critical limiting factor for portable devices with a constrained energy budget. Therefore, we target energy-efficient and high-throughput acceleration of the anyOCR pipeline. General-purpose computing platforms fail to meet these requirements, which makes custom hardware design mandatory. In this paper, we present a new concept named iDocChip: a portable hybrid hardware-software FPGA-based accelerator characterized by a small footprint, high power efficiency that allows its use in portable devices, and high throughput that makes it possible to process large collections of historical archives in real time without affecting accuracy. Here we focus on binarization, which is the second most critical step in the anyOCR pipeline after the text-line recognizer that we have already presented in a previous publication [21]. The anyOCR system makes use of a Percentile-Based Binarization (PBB) method that is suitable for overcoming degradations like non-uniform shading and bleed-through. To the best of our knowledge, we propose the first hardware architecture of the PBB technique. Based on this new architecture, we present a hybrid hardware-software FPGA-based accelerator that outperforms the existing anyOCR software implementation running on an i7-4790T in terms of runtime by a factor of 21, while achieving an energy efficiency of 10 Images/J, higher than that achieved by low-power embedded processors, with negligible loss of recognition accuracy.
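A software sketch of the general percentile-based binarization idea (an approximation for illustration, not the paper's hardware architecture or the exact anyOCR implementation): estimate the local white level with a high-percentile filter, normalize the image by it to flatten non-uniform shading, then apply a global threshold.

```python
import numpy as np
from scipy.ndimage import percentile_filter

def percentile_binarize(gray, window=51, percentile=80, threshold=0.5):
    """gray: 2-D float array in [0, 1]. Returns a boolean ink mask."""
    # Estimate the local background (white level) with a high-percentile
    # filter, which is robust to bleed-through and uneven illumination.
    background = percentile_filter(gray, percentile, size=window)
    # Normalize so the page background is ~1 everywhere, then threshold.
    flattened = gray / np.maximum(background, 1e-3)
    return flattened < threshold

page = np.random.rand(200, 200)   # placeholder for a scanned document image
ink = percentile_binarize(page)
print(ink.shape, ink.dtype)
```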
Reducing consistency in human realism increases the uncanny valley effect; increasing category uncertainty does not
Human replicas may elicit unintended cold, eerie feelings in viewers, an effect known as the uncanny valley. Masahiro Mori, who proposed the effect in 1970, attributed it to inconsistencies in the replica's realism, with some of its features perceived as human and others as nonhuman. This study aims to determine whether reducing realism consistency in visual features increases the uncanny valley effect. In three rounds of experiments, 548 participants categorized and rated humans, animals, and objects that varied from computer-animated to real. Two sets of features were manipulated to reduce realism consistency. (For humans, the sets were eyes-eyelashes-mouth and skin-nose-eyebrows.) Reducing realism consistency caused humans and animals, but not objects, to appear eerier and colder. However, the predictions of a competing theory, proposed by Ernst Jentsch in 1906, were not supported: the most ambiguous representations (those eliciting the greatest category uncertainty) were neither the eeriest nor the coldest.
*SEM 2013 shared task: Semantic Textual Similarity
In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence between two sentences on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in setup to the SemEval STS 2012 task, with pairs of sentences from sources related to those of 2012 yet different in genre from the 2012 set; namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical-resource glossed sets. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relatively high inter-annotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.
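For context, STS systems are typically scored by the correlation between their 0-5 predictions and the gold ratings. The sketch below shows that evaluation with a naive token-overlap baseline; the baseline and the gold ratings are our own illustration, not one of the participating systems or the official data.

```python
from scipy.stats import pearsonr

def overlap_score(s1, s2):
    """Naive baseline: scale word overlap (Jaccard) to the 0-5 STS range."""
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return 5.0 * len(a & b) / len(a | b) if a | b else 0.0

pairs = [
    ("a man is playing a guitar", "a man plays the guitar"),
    ("a woman is cooking rice", "the stock market fell today"),
    ("two dogs run in the park", "dogs are running through a park"),
]
gold = [4.8, 0.2, 4.2]   # hypothetical gold similarity ratings
pred = [overlap_score(s1, s2) for s1, s2 in pairs]
r, _ = pearsonr(pred, gold)
print(f"Pearson r = {r:.2f}")
```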
Quantum computation, non-demolition measurements, and reflective control in living systems.
Internal computation underlies robust non-equilibrium living processes. The smallest details of living systems are molecular devices that realize non-demolition quantum measurements. These smaller devices form larger devices (macromolecular complexes), up to the living body. A quantum device possesses its own potential internal quantum state (IQS), which is maintained for a prolonged time via reflective error-correction. The decoherence-free IQS can exhibit itself through the creative generation of iteration limits in the real world. It resembles the properties of a quasi-particle, which interacts with its surroundings by applying decoherence commands to them. In this framework, enzymes are molecular automata of an extremal quantum computer, the set of which maintains a highly ordered, robust coherent state, and the genome represents a concatenation of error-correcting codes into a single reflective set. Biological evolution can be viewed as a functional evolution of measurement constraints in which limits of iteration are established, possessing criteria of perfection and having selective value.