Effect of Substrate Temperature on Microstructural Characteristics of Thermal Sprayed Superalloys
Recently, there has been great interest in the application of thermal spraying processes to apply a protective layer on the surface of engineering components. Thermal spraying as a near-net-shape forming technique has also found applications in the manufacturing of advanced engineering components. Spraying methods such as High Velocity Oxygen Fuel (HVOF), Vacuum Plasma Spraying (VPS), and Air Plasma Spraying (APS) are among the most commonly used deposition techniques. Coatings are built up from the impact of molten particles on the substrate surface and their flattening and solidification (splat formation). Deposition of millions of individual splats connected to each other in different layers results in a lamellar structure, a typical example of an anisotropic microstructure. Microstructural features such as porosity and oxide layers define the physical and mechanical properties of the coating material. This study investigates the influence of substrate temperature on the microstructural characteristics of APS-deposited superalloy 625 on a steel substrate. The coatings were deposited on substrates held at different temperatures. The porosity level was measured using porosimetry. Both an image analysis technique and Electron Probe Microanalysis (EPMA) were used to measure the amount of oxide phase. The results indicated that a lower substrate temperature results in less oxide in the microstructure. No significant change in porosity level was observed due to substrate temperature.
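As a rough illustration of the image analysis step mentioned above, the following Python sketch estimates phase area fractions by pixel thresholding of a grayscale micrograph; the array and the gray-level windows are hypothetical stand-ins, not the calibration used in this study.

```python
import numpy as np

# Hypothetical 8-bit grayscale micrograph; in practice this would be loaded from an image file.
rng = np.random.default_rng(1)
micrograph = rng.integers(0, 256, size=(512, 512))

# Assumed gray-level windows for pores (dark) and oxide (intermediate); real values
# would be calibrated against the actual contrast of the coating cross-section.
pore_mask = micrograph < 40
oxide_mask = (micrograph >= 40) & (micrograph < 90)

porosity_pct = 100.0 * pore_mask.mean()   # area fraction of pores
oxide_pct = 100.0 * oxide_mask.mean()     # area fraction of oxide phase
print(f"porosity: {porosity_pct:.1f}%  oxide: {oxide_pct:.1f}%")
```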
Semantic Similarity from Natural Language and Ontology Analysis
In recent years, online social networking has revolutionized interpersonal communication. Recent research on language analysis in social media has been increasingly focusing on the latter's impact on our daily lives, both on a personal and a professional level. Natural language processing (NLP) is one of the most promising avenues for social media data processing. It is a scientific challenge to develop powerful methods and algorithms which extract relevant information from a large volume of data coming from multiple sources and languages in various formats or in free form. We discuss the challenges in analyzing social media texts in contrast with traditional documents. Research methods in information extraction, automatic categorization and clustering, automatic summarization and indexing, and statistical machine translation need to be adapted to a new kind of data. This book reviews the current research on NLP tools and methods for processing the non-traditional information from social media data that is available in large amounts (big data), and shows how innovative NLP approaches can integrate appropriate linguistic information in various fields such as social media monitoring, healthcare, business intelligence, industry, marketing, and security and defence. We review the existing evaluation metrics for NLP and social media applications, and the new efforts in evaluation campaigns or shared tasks on new datasets collected from social media. Such tasks are organized by the Association for Computational Linguistics (such as SemEval tasks) or by the National Institute of Standards and Technology via the Text REtrieval Conference (TREC) and the Text Analysis Conference (TAC). In the concluding chapter, we discuss the importance of this dynamic discipline and its great potential for NLP in the coming decade, in the context of changes in mobile technology, cloud computing, virtual reality, and social networking. In this second edition, we have added information about recent progress in the tasks and applications presented in the first edition. We discuss new methods and their results. The number of research projects and publications that use social media data is constantly increasing due to continuously growing amounts of social media data and the need to automatically process them. We have added 85 new references to the more than 300 references from the first edition. Besides updating each section, we have added a new application (digital marketing) to the section on media monitoring and we have augmented the section on healthcare applications with an extended discussion of recent research on detecting signs of mental illness from social media.
The Impact of Sleep on Learning and Behavior in Adolescents
Many adolescents are experiencing a reduction in sleep as a consequence of a variety of behavioral factors (e.g., academic workload, social and employment opportunities), even though scientific evidence suggests that the biological need for sleep increases during maturation. Consequently, the ability to effectively interact with peers while learning and processing novel information may be diminished in many sleep-deprived adolescents. Furthermore, sleep deprivation may account for reductions in cognitive efficiency in many children and adolescents with special education needs. In response to recognition of this potential problem by parents, educators, and scientists, some school districts have implemented delayed bus schedules and school start times to allow for increased sleep duration for high school students, in an effort to increase academic performance and decrease behavioral problems. The long-term effects of this change are yet to be determined; however, preliminary studies suggest that the short-term impact on learning and behavior has been beneficial. Thus, many parents, teachers, and scientists are supporting further consideration of this information to formulate policies that may maximize learning and developmental opportunities for children. Although changing school start times may be an effective method to combat sleep deprivation in most adolescents, some adolescents experience sleep deprivation and consequent diminished daytime performance because of common underlying sleep disorders (e.g., asthma or sleep apnea). In such cases, surgical, pharmaceutical, or respiratory therapy interventions, or a combination of the three, are required to restore normal sleep and daytime performance.
Magnetic Treatment of Irrigation Water and its Effect on Water Salinity
The influence of a magnetic field on the structure of water and aqueous solutions is similar in both cases and can alter the physical and chemical properties of water-dispersed systems. With the application of a magnetic field, hydration of salt ions and other impurities decreases, improving the attainable technological characteristics of the water. A magnetic field can enhance the characteristics of water, i.e., better salt solubility, kinetic changes in salt crystallization, accelerated coagulation, etc. Gulf countries are facing a critical problem due to the depletion of water resources and increasing food demands to cover human needs; therefore water shortage is being increasingly accepted as a major limitation for increased agricultural production and food security. In arid and semiarid regions, sustainable agricultural development is influenced to a great extent by water quality that might be used economically and effectively in developing agricultural programs. In the present study, the possibility of using magnetized water to desalinate the soil is attributed to the enhanced dissolving capacity of the magnetized water. A magnetic field has been applied to treat brackish water. The study showed that the impact of the magnetic field on saline water is sustained for up to three hours (with and without shaking). These results suggest that even a low magnetic field can decrease the electrical conductivity and total dissolved solids, which is beneficial for the removal of salinity from irrigated land using magnetized water.
Interpretable Representation Learning for Healthcare via Capturing Disease Progression through Time
Various deep learning models have recently been applied to predictive modeling of Electronic Health Records (EHR). In medical claims data, which is a particular type of EHR data, each patient is represented as a sequence of temporally ordered, irregularly sampled visits to health providers, where each visit is recorded as an unordered set of medical codes specifying the patient's diagnosis and treatment provided during the visit. Based on the observation that different patient conditions have different temporal progression patterns, in this paper we propose a novel interpretable deep learning model, called Timeline. The main novelty of Timeline is that it has a mechanism that learns time decay factors for every medical code. This allows Timeline to learn that chronic conditions have a longer lasting impact on future visits than acute conditions. Timeline also has an attention mechanism that improves vector embeddings of visits. By analyzing the attention weights and disease progression functions of Timeline, it is possible to interpret the predictions and understand how risks of future visits change over time. We evaluated Timeline on two large-scale real-world data sets. The specific task was to predict the primary diagnosis category of the next hospital visit given previous visits. Our results show that Timeline has higher accuracy than state-of-the-art deep learning models based on RNNs. In addition, we demonstrate that the time decay factors and attention weights learned by Timeline are in accord with medical knowledge and that Timeline can provide useful insight into its predictions.
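The following Python sketch illustrates the general idea behind the mechanism described above (per-code time decay combined with attention pooling of code embeddings); the sizes, the random parameters, and the exponential decay form are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 5 medical codes with 8-dim embeddings; the decay rates and the
# attention vector would normally be learned end-to-end, here they are random stand-ins.
n_codes, dim = 5, 8
code_emb = rng.normal(size=(n_codes, dim))       # one embedding per medical code
decay = rng.uniform(0.05, 2.0, size=n_codes)     # per-code time decay (chronic -> small value)
attn_vec = rng.normal(size=dim)                  # attention scoring vector

def visit_embedding(codes, days_since_visit):
    """Pool the codes of one past visit into a single vector.

    Each code's attention weight is scaled by exp(-decay * elapsed days),
    so acute codes fade quickly while chronic codes keep influencing the prediction.
    """
    e = code_emb[codes]                                   # (k, dim)
    scores = e @ attn_vec                                 # attention logits
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                  # softmax attention
    w = alpha * np.exp(-decay[codes] * days_since_visit)  # time-decayed attention
    return w @ e, w

vec, w = visit_embedding(codes=[0, 2, 4], days_since_visit=30.0)
print(w)    # contribution of each code after 30 days
print(vec)  # pooled visit representation fed to the downstream predictor
```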
Deep Representation Learning for Human Motion Prediction and Classification
Generative models of 3D human motion are often restricted to a small number of activities and can therefore not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though those methods use action-specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction.
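A minimal PyTorch sketch of the kind of encoding-decoding network described above, mapping a window of past poses to future poses and exposing the bottleneck as a motion feature; the joint counts, window lengths, and layer sizes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 25 joints x 3 coords, 10 past frames predict 5 future frames.
n_joints, past, future = 25, 10, 5
in_dim, out_dim = n_joints * 3 * past, n_joints * 3 * future

class PosePredictor(nn.Module):
    """Feedforward encoder-decoder: past pose window -> bottleneck feature -> future poses."""
    def __init__(self, bottleneck=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                     nn.Linear(512, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 512), nn.ReLU(),
                                     nn.Linear(512, out_dim))
    def forward(self, x):
        h = self.encoder(x)            # h is the learned motion representation
        return self.decoder(h), h

model = PosePredictor()
past_window = torch.randn(16, in_dim)  # batch of 16 flattened pose windows
pred, feat = model(past_window)
loss = nn.functional.mse_loss(pred, torch.randn(16, out_dim))  # placeholder target
loss.backward()
print(pred.shape, feat.shape)          # feat could feed an action classifier
```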
Deep brain stimulation of the subthalamic nucleus for the treatment of Parkinson's disease
High-frequency deep brain stimulation (DBS) of the subthalamic nucleus (STN-HFS) is the preferred surgical treatment for advanced Parkinson's disease. In the 15 years since its introduction into clinical practice, many studies have reported on its benefits, drawbacks, and insufficiencies. Despite limited evidence-based data, STN-HFS has been shown to be surgically safe, and improvements in dopaminergic drug-sensitive symptoms and reductions in subsequent drug dose and dyskinesias are well documented. However, the procedure is associated with adverse effects, mainly neurocognitive, and with side-effects created by spread of stimulation to surrounding structures, depending on the precise location of electrodes. Quality of life improves substantially, inducing sudden global changes in patients' lives, often requiring societal readaptation. STN-HFS is a powerful method that is currently unchallenged in the management of Parkinson's disease, but its long-term effects must be thoroughly assessed. Further improvements, through basic research and methodological innovations, should make it applicable to earlier stages of the disease and increase its availability to patients in developing countries.
Intrusion Detection with Neural Networks
With the rapid expansion of computer networks during the past few years, security has become a crucial issue for modern computer systems. A good way to detect illegitimate use is through monitoring unusual user activity. Methods of intrusion detection based on hand-coded rule sets or on predicting commands on-line are laborious to build or not very reliable. This paper proposes a new way of applying neural networks to detect intrusions. We believe that a user leaves a 'print' when using the system; a neural network can be used to learn this print and identify each user much like detectives use thumbprints to place people at crime scenes. If a user's behavior does not match his/her print, the system administrator can be alerted to a possible security breach. A backpropagation neural network called NNID (Neural Network Intrusion Detector) was trained on the identification task and tested experimentally on a system of 10 users. The system was 96% accurate in detecting unusual activity, with a 7% false alarm rate. These results suggest that learning user profiles is an effective way of detecting intrusions.
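The sketch below (Python, scikit-learn) illustrates the general approach of learning a user's 'print' from command-frequency vectors with a small backpropagation network and flagging sessions that do not match the claimed user; the data, network size, and the 0.5 alert threshold are hypothetical, not the NNID configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical data: each row is one day of activity for a user, encoded as the
# frequency distribution over 100 commands (the "print" the paper refers to).
n_users, days_per_user, n_commands = 10, 30, 100
profiles = rng.dirichlet(np.ones(n_commands) * 0.3, size=n_users)   # per-user command habits
X = np.vstack([rng.multinomial(200, p, size=days_per_user) / 200.0 for p in profiles])
y = np.repeat(np.arange(n_users), days_per_user)

# A small backpropagation network trained to identify which user produced each day of activity.
clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0).fit(X, y)

# At detection time: if the claimed user is unlikely under the network's output,
# flag the session as a possible intrusion.
day = rng.multinomial(200, profiles[3]) / 200.0
proba = clf.predict_proba(day.reshape(1, -1))[0]
claimed_user = 3
print("P(claimed user) =", round(proba[claimed_user], 3),
      "-> alert" if proba[claimed_user] < 0.5 else "-> ok")
```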
All-Solid-State Repetitive Pulsed-Power Generator Using IGBT and Magnetic Compression Switches
By utilizing power semiconductor switches, especially high-voltage insulated gate bipolar transistors (IGBTs), as main switches, Marx modulators have demonstrated many advantages such as variable pulse length and pulse-repetition frequency, snubberless operation, and inherent redundancy. However, the relatively slow turn-on speed of the IGBT influences the pulse rise time of the Marx modulator. In this paper, a newly developed all-solid-state pulsed-power generator is proposed. This generator consists of a Marx modulator based on discrete IGBTs and a magnetic pulse-sharpening circuit, which is employed to compress the rising edge of the Marx output pulse. The experimental results are presented with a maximum voltage of 20 kV, a rise time of 200 ns, a pulse duration of 500 ns (full-width at half-maximum), and a repetition rate of 5 kHz on a 285-Ω resistive load. The output power of the generator is 2.5 kW, and the average power in one pulse is 1 MW. The design methods of the IGBT drive circuits and the parameter calculation of the magnetic pulse-sharpening circuit are introduced in detail in this paper. The factors influencing the performance of the generator are also discussed.
Angiotensin-converting enzyme inhibitor trandolapril does not affect C-reactive protein levels in myocardial infarction patients.
Trandolapril Does Not Affect C-Reactive Protein Levels in Myocardial Infarction Patients To the Editor: Recently, Wang et al1 reported a direct effect of C-reactive protein (CRP) on the angiotensin system. Their findings add to the accumulating evidence that CRP is not only an indirect risk marker for cardiovascular disease but also causally contributes to the precipitation of the disease. An association between CRP and the angiotensin system also raises the question of what the effect is of angiotensin-converting enzyme (ACE) inhibitors on CRP, given that the ACE inhibitors are often used after coronary events. We studied the effect of the ACE inhibitor, trandolapril, on CRP levels in 80 patients who participated in the TRAndolapril Cardiac Evaluation (TRACE) study,2 randomized to treatment (0.5 mg daily) or placebo. Citrated blood was collected from these patients during hospitalization after a myocardial infarction but before randomization, and after 1, 3, 6, 9, and 12 months of treatment. CRP was measured in the plasma using a high-sensitivity enzyme immunoassay with polyclonal antibodies to human CRP as catching and tagging antibodies (Dako). The effect of treatment on CRP was analyzed using repeated-measures ANOVA on logarithmically transformed CRP levels. CRP levels were elevated after the myocardial infarction (geometric mean at randomization, 34.1 mg/L [coefficient of variation {CV}, 1.36]) and had returned to normal levels after 1 month, both in the treated group and in the placebo group (2.50 mg/L [CV 1.20] and 3.20 mg/L [CV 1.53], respectively; NS). The levels were not significantly different at any time point during the treatment period. The results of our study suggest that the association between CRP and the angiotensin system does not include lowering of CRP by ACE inhibitors but may be limited to an effect of CRP on angiotensin I receptor expression as reported by Wang et al.1 In conclusion, CRP levels after a myocardial infarction and the beneficial effect of ACE-inhibitor treatment of patients after a myocardial infarction do not seem to include a reduction of levels of the inflammatory marker CRP.
An evaluation of the usefulness of two terminology models for integrating nursing diagnosis concepts into SNOMED Clinical Terms®
OBJECTIVES We evaluated the usefulness of two models for integrating nursing diagnosis concepts into SNOMED Clinical Terms (CT). METHODS First, we dissected nursing diagnosis term phrases from two source terminologies (North American Nursing Diagnosis Association Taxonomy 1 (NANDA) and Omaha System) into the semantic categories of the European Committee for Standardization (CEN) categorical structure and ISO reference terminology model (RTM). Second, we critically analyzed the similarities between the semantic links in the CEN and ISO models and the semantic links used to formally define diagnostic concepts in SNOMED CT. RESULTS Our findings demonstrated that focus, bearer/subject of information, and judgment were present in 100% of the NANDA and Omaha term phrases. The Omaha term phrases contained no additional descriptors beyond those considered mandatory in the CEN and ISO models. The comparison among the semantic links showed that SNOMED CT currently contains all but one of the semantic links needed to model the two source terminologies for integration. In conclusion, our findings support the potential utility of the CEN and ISO models for integrating nursing diagnostic concepts into SNOMED CT.
Personality as a predictor of Business Social Media Usage: an Empirical Investigation of Xing Usage Patterns
Referring to recent research calls regarding the role of individual differences in technology adoption and use, this paper reports on an empirical investigation of the influence of a user's personality on the usage of the European career-oriented social network XING and its usage intensity (n = 760). Using structural equation modeling, a significant influence of personality on the intensity of XING usage was found (R² = 12.4%, α = 0.758). More specifically, results indicated the major role played by the personality traits Extraversion, Emotional Stability and Openness to Experience as proper predictors of XING usage. Contrary to prior research on private-oriented social media, I discovered a significant positive Emotional Stability–XING usage intensity relationship instead of a negative relationship, which is explained by Goffman's Self Presentation Theory.
Semi-supervised Stacked Label Consistent Autoencoder for Reconstruction and Analysis of Biomedical Signals
Objective: An autoencoder-based framework that simultaneously reconstructs and classifies biomedical signals is proposed. Previous work has treated reconstruction and classification as separate problems. This is the first study that proposes a combined framework to address the issue in a holistic fashion. Methods: For telemonitoring purposes, reconstruction techniques of biomedical signals are largely based on compressed sensing (CS); these are “designed” techniques where the reconstruction formulation is based on some “assumption” regarding the signal. In this study, we propose a new paradigm for reconstruction: the reconstruction is “learned” using an autoencoder; it does not require any assumption regarding the signal as long as there is sufficiently large training data. But since the final goal is to analyze/classify the signal, the system can also learn a linear classification map that is added inside the autoencoder. The ensuing optimization problem is solved using the Split Bregman technique. Results: Experiments were carried out on reconstructing and classifying electrocardiogram (ECG) (arrhythmia classification) and EEG (seizure classification) signals. Conclusion: Our proposed tool is capable of operating in a semi-supervised fashion. We show that our proposed method is better at reconstruction and more than an order of magnitude faster than CS-based methods; it is capable of real-time operation. Our method also yields better results than recently proposed classification methods. Significance: This is the first study offering an alternative to CS-based reconstruction. It also shows that the representation learning approach can yield better results than traditional methods that use hand-crafted features for signal analysis.
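A minimal PyTorch sketch of an autoencoder with an added linear classification map trained in a semi-supervised fashion (labeled batches add a cross-entropy term, unlabeled batches contribute reconstruction only); the dimensions are hypothetical and Adam stands in for the Split Bregman solver used in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: 100-sample signal windows, a 25-dim hidden code, 5 classes.
sig_len, code_len, n_classes = 100, 25, 5

class LabelConsistentAE(nn.Module):
    """Autoencoder whose hidden code is also fed to a linear classifier,
    so reconstruction and classification are learned jointly."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(sig_len, code_len)
        self.dec = nn.Linear(code_len, sig_len)
        self.cls = nn.Linear(code_len, n_classes)   # the linear classification map
    def forward(self, x):
        z = torch.relu(self.enc(x))
        return self.dec(z), self.cls(z)

model = LabelConsistentAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # stand-in for the Split Bregman solver

x_lab, y_lab = torch.randn(32, sig_len), torch.randint(0, n_classes, (32,))
x_unlab = torch.randn(64, sig_len)                   # unlabeled data: reconstruction loss only

recon_l, logits_l = model(x_lab)
recon_u, _ = model(x_unlab)
loss = (nn.functional.mse_loss(recon_l, x_lab)
        + nn.functional.mse_loss(recon_u, x_unlab)
        + nn.functional.cross_entropy(logits_l, y_lab))
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```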
Multiple Vehicle Detection and Tracking from Surveillance Camera with Collision Prediction
This paper describes a system for detection and tracking of multiple vehicles from a surveillance camera with collision detection and prediction. Accurate vehicle contours are obtained in the detection phase, and object centroids are calculated. Each detected vehicle is assigned to a specific lane and tracked separately. Calculated centroids are used for object tracking using a contour-based algorithm with movement prediction, which provides a sufficient amount of information to predict vehicle movement. A rectangle constructed around the ground part of the vehicle is used for vehicle collision prediction. Experimental results show a success rate of 71% when constructing the ground rectangles, which are key for collision prediction.
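A minimal Python sketch of centroid-based tracking with constant-velocity movement prediction, the core of the tracking step described above; the greedy nearest-neighbor matching and the distance threshold are simplifying assumptions for illustration.

```python
import numpy as np

# Minimal centroid tracker: each track keeps its last centroid and velocity, predicts the
# next position, and is matched to the nearest detected centroid in the new frame.
class Track:
    def __init__(self, tid, centroid):
        self.tid = tid
        self.centroid = np.asarray(centroid, float)
        self.velocity = np.zeros(2)
    def predict(self):
        return self.centroid + self.velocity          # constant-velocity movement prediction
    def update(self, new_centroid):
        new_centroid = np.asarray(new_centroid, float)
        self.velocity = new_centroid - self.centroid
        self.centroid = new_centroid

def step(tracks, detections, max_dist=50.0):
    """Greedy nearest-neighbor assignment of detected centroids to predicted track positions."""
    unmatched = list(range(len(detections)))
    for tr in tracks:
        if not unmatched:
            break
        pred = tr.predict()
        d = [np.linalg.norm(pred - np.asarray(detections[i], float)) for i in unmatched]
        j = int(np.argmin(d))
        if d[j] < max_dist:
            tr.update(detections[unmatched.pop(j)])
    return tracks

tracks = [Track(0, (100, 200)), Track(1, (400, 220))]
tracks = step(tracks, [(112, 203), (395, 218)])       # centroids from the detection phase
print([(t.tid, t.centroid.tolist(), t.velocity.tolist()) for t in tracks])
```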
A One-Stage Correction of the Blepharophimosis Syndrome Using a Standard Combination of Surgical Techniques
The aim of this study was to evaluate the efficacy of a one-stage treatment for the blepharophimosis-ptosis-epicanthus inversus syndrome (BPES) using a combination of standard surgical techniques. This is a retrospective interventional case series study of 21 BPES patients with a 1-year minimum follow-up period. The one-stage intervention combined three different surgical procedures in the following order: Z-epicanthoplasty for the epicanthus, transnasal wiring of the medial canthal ligaments for the telecanthus, and a bilateral fascia lata sling for ptosis correction. Preoperative and postoperative measurements of the horizontal lid fissure length (HFL), vertical lid fissure width (VFW), nasal intercanthal distance (ICD), and the ratio between the intercanthal distance and the horizontal fissure length (ICD/HFL) were analyzed using Student’s t test for paired variables. The mean preoperative measurements were 4.95 ± 1.13 mm for the VFW, 20.90 ± 2.14 mm for the HFL, 42.45 ± 2.19 mm for the ICD, and 2.04 ± 0.14 mm for the ICD/HFL ratio. The mean postoperative measurements were 7.93 ± 1.02 mm for the VFW, 26.36 ± 1.40 mm for the HFL, 32.07 ± 1.96 mm for the ICD, and 1.23 ± 0.09 mm for the ICD/HFL ratio. All these values and their differences were statistically significant (P < 0.0001). All of the patients developed symmetric postoperative inferior version lagophthalmus, a complication that tended to decrease over time. One-stage correction of BPES is safe and efficient with the surgical techniques described.
A Review on Educational Robotics as Assistive Tools For Learning Mathematics and Science
Robots have become more common in our society as they penetrate the education system as well as industry. More research has been done on robotics and its application in education. Does the use of robots in teaching and learning actually work, and is it effective in the Malaysian context? What is the importance of educational robotics in education, and what skills can be sharpened by using robotics in education? As programming is vital in educational robotics, further issues arise: which programming environment is suitable for Malaysian schools, and how can it be implemented among students? Building on this discussion, a new robotics curriculum is suggested. This paper presents a review of educational robotics, its advantages for educational fields, the hardware design, and the common programming software used, which can be implemented among Malaysian students. The results of the overview will help to spark interest not only in researchers in the field of human–robot interaction but also in administrators of educational institutes who wish to understand the wider implications of adopting robots in education.
Design of an arm exoskeleton with scapula motion for shoulder rehabilitation
The evolution of an arm exoskeleton design for treating shoulder pathology is examined. Tradeoffs between various kinematic configurations are explored, and a device with five active degrees of freedom is proposed. Two rapid-prototype designs were built and fitted to several subjects to verify the kinematic design and determine passive link adjustments. Control modes are developed for exercise therapy and functional rehabilitation, and a distributed software architecture that incorporates computer safety monitoring is described. Although intended primarily for therapy, the exoskeleton is also used to monitor progress in strength, range of motion, and functional task performance.
Formation and Stability of the Gaseous Species LiAlCl4, Li2AlCl5 and LiAl2Cl7 – Mass Spectrometric and Quantum Chemical Studies
The formation of the gaseous species LiCl, Li2Cl2, AlCl3 and LiAlCl4 was shown by mass spectrometric studies of the reaction of solid LiCl with gaseous AlCl3 at 575 °C. Besides AlCl3 and Al2Cl6, the gas complexes LiAlCl4, Li2AlCl5 and LiAl2Cl7 were formed during the evaporation of liquefied LiAlCl4. The structures of the molecules under discussion were computed by quantum chemical DFT studies. Thermodynamic data of these molecules were determined by experimental methods (mass spectrometry), and the results were confirmed by theoretical calculations.
Auditing Anti-Malware Tools by Evolving Android Malware and Dynamic Loading Technique
Although a previous paper shows that existing anti-malware tools (AMTs) may have a high detection rate, the report is based on existing malware and thus does not imply that AMTs can effectively deal with future malware. It is desirable to have an alternative way of auditing AMTs. In our previous paper, we used malware samples from the Android malware collection Genome to summarize a malware meta-model for modularizing the common attack behaviors and evasion techniques into reusable features. We then combined different features with an evolutionary algorithm, thereby evolving malware variants. Previous results have shown that existing AMTs only exhibit a detection rate of 20%–30% for 10 000 evolved malware variants. In this paper, based on the modularized attack features, we apply dynamic code generation and loading techniques to produce malware, so that we can audit the AMTs at runtime. We implement our approach, named Mystique-S, as a service-oriented malware generation system. Mystique-S automatically selects attack features under various user scenarios and delivers the corresponding malicious payloads at runtime. Relying on dynamic code binding (via service) and loading (via reflection) techniques, Mystique-S enables dynamic execution of payloads on user devices at runtime. Experimental results on real-world devices show that existing AMTs are incapable of detecting most of our generated malware. Lastly, we propose enhancements for existing AMTs.
Common polymorphisms of the PPAR-γ2 (Pro12Ala) and PGC-1α (Gly482Ser) genes are associated with the conversion from impaired glucose tolerance to type 2 diabetes in the STOP-NIDDM trial
We investigated the effects of the common polymorphisms in the peroxisome proliferator-activated receptor γ2 (PPAR-γ2; Pro12Ala) and in PPAR-γ coactivator 1α (PGC-1α; Gly482Ser) genes on the conversion from impaired glucose tolerance to type 2 diabetes in participants in the STOP-NIDDM trial. This trial aimed to study the effect of acarbose in the prevention of type 2 diabetes. Genotyping was performed in 770 study subjects whose DNA was available. The Gly482Ser variant in the PGC-1α gene was determined with the polymerase chain reaction amplification, Hpa II enzyme digestion, and gel electrophoresis. The Pro12Ala polymorphism of the PPAR-γ2 gene was determined by the polymerase chain reaction–single-strand conformation polymorphism analysis. The Pro12Pro genotype of the PPAR-γ2 gene predicted the conversion to diabetes in women in the acarbose group (odds ratio 2.89, 95% CI 1.20 to 6.96; p=0.018). The 482Ser allele of the PGC-1α gene had a significant interaction with the mode of treatment (p=0.012), and in the placebo group the 482Ser allele was associated with a 1.6-fold higher risk for type 2 diabetes compared to the Gly482Gly genotype (95% CI 1.06 to 2.33; p=0.023). Acarbose prevented the development of diabetes independently of the genotype of the PPAR-γ2 gene, but only the carriers of the 482Ser allele of the PGC-1α gene were responsive to acarbose treatment. We conclude that the Pro12Pro genotype of the PPAR-γ2 gene and the 482Ser allele of the PGC-1α gene are associated with the conversion from impaired glucose tolerance to type 2 diabetes in the STOP-NIDDM trial.
Learning Partially Contracting Dynamical Systems from Demonstrations
An algorithm for learning the dynamics of point-to-point motions from demonstrations using an autonomous nonlinear dynamical system, named contracting dynamical system primitives (CDSP), is presented. The motion dynamics are approximated using a Gaussian mixture model (GMM) and its parameters are learned subject to constraints derived from partial contraction analysis. Systems learned using the proposed method generate trajectories that accurately reproduce the demonstrations and are guaranteed to converge to a desired goal location. Additionally, the learned models are capable of quickly and appropriately adapting to unexpected spatial perturbations and changes in goal location during reproductions. The CDSP algorithm is evaluated on shapes from a publicly available human handwriting dataset and also compared with two state-of-the-art motion generation algorithms. Furthermore, the CDSP algorithm is also shown to be capable of learning and reproducing point-to-point motions directly from real-world demonstrations using a Baxter robot.
Generic Application-Level Protocol Analyzer and its Language
The Shield project relied on application protocol analyzers to detect potential exploits of application vulnerabilities. We present the design of a second-generation generic application-level protocol analyzer (GAPA) that encompasses a domain-specific language and the associated run-time. We designed GAPA to satisfy three important goals: safety, real-time analysis and response, and rapid development of analyzers. We have found that these goals are relevant for many network monitors that implement protocol analysis. Therefore, we built GAPA to be readily integrated into tools such as Ethereal as well as Shield. GAPA preserves safety through the use of a memory-safe language for both message parsing and analysis, and through various techniques to reduce the amount of state maintained in order to avoid denial-of-service attacks. To support online analysis, the GAPA runtime uses a stream-processing model with incremental parsing. In order to speed protocol development, GAPA uses a syntax similar to many protocol RFCs and other specifications, and incorporates many common protocol analysis tasks as built-in abstractions. We have specified 10 commonly used protocols in the GAPA language and found it expressive and easy to use. We measured our GAPA prototype and found that it can handle an enterprise client HTTP workload at up to 60 Mbps, sufficient performance for many end-host firewall/IDS scenarios. At the same time, the trusted code base of GAPA is an order of magnitude smaller than Ethereal.
Quantification of mRNA using real-time reverse transcription PCR (RT-PCR): trends and problems.
The fluorescence-based real-time reverse transcription PCR (RT-PCR) is widely used for the quantification of steady-state mRNA levels and is a critical tool for basic research, molecular medicine and biotechnology. Assays are easy to perform, capable of high throughput, and can combine high sensitivity with reliable specificity. The technology is evolving rapidly with the introduction of new enzymes, chemistries and instrumentation. However, while real-time RT-PCR addresses many of the difficulties inherent in conventional RT-PCR, it has become increasingly clear that it engenders new problems that require urgent attention. Therefore, in addition to providing a snapshot of the state-of-the-art in real-time RT-PCR, this review has an additional aim: it will describe and discuss critically some of the problems associated with interpreting results that are numerical and lend themselves to statistical analysis, yet whose accuracy is significantly affected by reagent and operator variability.
Tripartite model of anxiety and depression: psychometric evidence and taxonomic implications.
We review psychometric and other evidence relevant to mixed anxiety-depression. Properties of anxiety and depression measures, including the convergent and discriminant validity of self- and clinical ratings, and interrater reliability, are examined in patient and normal samples. Results suggest that anxiety and depression can be reliably and validly assessed; moreover, although these disorders share a substantial component of general affective distress, they can be differentiated on the basis of factors specific to each syndrome. We also review evidence for these specific factors, examining the influence of context and scale content on ratings, factor analytic studies, and the role of low positive affect in depression. With these data, we argue for a tripartite structure consisting of general distress, physiological hyperarousal (specific anxiety), and anhedonia (specific depression), and we propose a diagnosis of mixed anxiety-depression.
The pattern of facial skeletal growth and its relationship to various common indexes of maturation.
INTRODUCTION Sequential stages in the development of the hand, wrist, and cervical vertebrae commonly are used to assess maturation and predict the timing of the adolescent growth spurt. This approach is predicated on the idea that forecasts based on skeletal age must, of necessity, be superior to those based on chronologic age. This study was undertaken to test this reasonable, albeit largely unproved, assumption in a large, longitudinal sample. METHODS Serial records of 100 children (50 girls, 50 boys) were chosen from the files of the Bolton-Brush Growth Study Center in Cleveland, Ohio. The 100 series were 6 to 11 years in length, a span that was designed to encompass the onset and the peak of the adolescent facial growth spurt in each subject. Five linear cephalometric measurements (S-Na, Na-Me, PNS-A, S-Go, Go-Pog) were summed to characterize general facial size; a sixth (Co-Gn) was used to assess mandibular length. In all, 864 cephalograms were traced and analyzed. For most years, chronologic age, height, and hand-wrist films were available, thereby permitting various alternative methods of maturational assessment and prediction to be tested. The hand-wrist and the cervical vertebrae films for each time point were staged. Yearly increments of growth for stature, face, and mandible were calculated and plotted against chronologic age. For each subject, the actual age at onset and peak for stature and facial and mandibular size served as the gold standards against which key ages inferred from other methods could be compared. RESULTS On average, the onset of the pubertal growth spurts in height, facial size, and mandibular length occurred in girls at 9.3, 9.8, and 9.5 years, respectively. The difference in timing between height and facial size growth spurts was statistically significant. In boys, the onset for height, facial size, and mandibular length occurred more or less simultaneously at 11.9, 12.0, and 11.9 years, respectively. In girls, the peak of the growth spurt in height, facial size, and mandibular length occurred at 10.9, 11.5, and 11.5 years. Height peaked significantly earlier than both facial size and mandibular length. In boys, the peak in height occurred slightly (but statistically significantly) earlier than did the peaks in the face and mandible: 14.0, 14.4, and 14.3 years. Based on rankings, the hand-wrist stages provided the best indication (lowest root mean squared error) that maturation had advanced to the peak velocity stage. Chronologic age, however, was nearly as good, whereas the vertebral stages were consistently the worst. Errors from the use of statural onset to predict the peak of the pubertal growth spurt in height, facial size, and mandibular length were uniformly lower than for predictions based on the cervical vertebrae. Chronologic age, especially in boys, was a close second. CONCLUSIONS The common assumption that onset and peak occur at ages 12 and 14 years in boys and 10 and 12 years in girls seems correct for boys, but it is 6 months to 1 year late for girls. As an index of maturation, hand-wrist skeletal ages appear to offer the best indication that peak growth velocity has been reached. Of the methods tested here for the prediction of the timing of peak velocity, statural onset had the lowest errors. Although mean chronologic ages were nearly as good, stature can be measured repeatedly and thus might lead to improved prediction of the timing of the adolescent growth spurt.
Control of risk factors in and treatment of patients with coronary heart disease: the TRECE study.
The aim of the TRECE study was to describe treatment in patients with coronary heart disease (CHD). It was an observational, cross-sectional multicenter study of patients who were treated in either an internal medicine (n=50) or cardiology (n=50) department or in primary care (n=100) during 2006. The patients' history, risk factors and treatment were recorded, and noncardiac disease was evaluated using the Charlson index. Optimal medical treatment (OMT) was regarded as comprising combined administration of antiplatelet agents, statins, beta-blockers, and renin-angiotensin-aldosterone system blockers. In total, data on 2897 patients were analyzed; their mean age was 67.4 years and 71.5% were male. Overall, 25.9% (95% confidence interval, 25.6-26.2%) received OMT. Multivariate analysis showed that prescription of OMT was independently associated with hypertension, diabetes, current smoking, myocardial infarction and angina. In contrast, nonprescription of OMT was associated with atrial fibrillation, chronic obstructive pulmonary disease and a Charlson index ≥4. The main findings were that few CHD patients were prescribed OMT and that its prescription was determined by the presence of symptoms and comorbid conditions.
Feature Learning for Interaction Activity Recognition in RGBD Videos
This paper proposes a human activity recognition method based on features learned from 3D video data without incorporating domain knowledge. Experiments on data collected by RGBD cameras produce results outperforming other techniques. Our feature encoding method follows the bag-of-visual-words model; we then use an SVM classifier to recognise the activities. We do not use skeleton or tracking information, and the same technique is applied to color and depth data.
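A minimal scikit-learn sketch of the bag-of-visual-words pipeline described above (cluster local descriptors into a codebook, encode each clip as a word histogram, classify with an SVM); the descriptors, vocabulary size, and labels here are random stand-ins for the RGB-D features.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical local descriptors (e.g., spatio-temporal features from the RGB-D clips);
# each video contributes a variable-length set of 64-dim descriptors.
videos = [rng.normal(size=(rng.integers(50, 120), 64)) for _ in range(30)]
labels = rng.integers(0, 3, size=30)                  # 3 hypothetical activity classes

# 1) Build the visual vocabulary by clustering all descriptors.
codebook = KMeans(n_clusters=20, n_init=10, random_state=0).fit(np.vstack(videos))

# 2) Encode each video as a normalized histogram of visual words (bag-of-visual-words).
def encode(descr):
    words = codebook.predict(descr)
    hist = np.bincount(words, minlength=20).astype(float)
    return hist / hist.sum()

X = np.array([encode(v) for v in videos])

# 3) Train an SVM on the histograms to recognise the activity.
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```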
SECURITY CHALLENGES, ISSUES AND THEIR SOLUTIONS FOR VANET
Vehicular Ad hoc Networks (VANETs) are a promising approach to provide safety and other applications to drivers as well as passengers. They have become a key component of intelligent transport systems. A lot of work has been done in this area, but security in VANETs has received less attention. In this article, we discuss VANETs and their technical and security challenges. We also discuss some major attacks and solutions that can be implemented against these attacks. We compare the solutions using different parameters. Lastly, we discuss the mechanisms that are used in these solutions.
The development and malleability of executive control abilities
Executive control (EC) generally refers to the regulation of mental activity. It plays a crucial role in complex cognition, and EC skills predict high-level abilities including language processing, memory, and problem solving, as well as practically relevant outcomes such as scholastic achievement. EC develops relatively late in ontogeny, and many sub-groups of developmental populations demonstrate an exaggeratedly poor ability to control cognition even alongside the normal protracted growth of EC skills. Given the value of EC to human performance, researchers have sought means to improve it through targeted training; indeed, accumulating evidence suggests that regulatory processes are malleable through experience and practice. Nonetheless, there is a need to understand both whether specific populations might particularly benefit from training, and what cortical mechanisms engage during performance of the tasks used in the training protocols. This contribution has two parts: in Part I, we review EC development and intervention work in select populations. Although promising, the mixed results in this early field make it difficult to draw strong conclusions. To guide future studies, in Part II, we discuss training studies that have included a neuroimaging component - a relatively new enterprise that also has not yet yielded a consistent pattern of results post-training, preventing broad conclusions. We therefore suggest that recent developments in neuroimaging (e.g., multivariate and connectivity approaches) may be useful to advance our understanding of the neural mechanisms underlying the malleability of EC and brain plasticity. In conjunction with behavioral data, these methods may further inform our understanding of the brain-behavior relationship and the extent to which EC is dynamic and malleable, guiding the development of future, targeted interventions to promote executive functioning in both healthy and atypical populations.
Estimating the ratio of CD4+ to CD8+ T cells using high-throughput sequence data.
Mature T cells express either CD8 or CD4, defining two physiologically distinct populations of T cells. CD8+ T cells, or killer T-cells, and CD4+ T cells, or helper T cells, effect different aspects of T cell mediated adaptive immunity. Currently, determining the ratio of CD4+ to CD8+ T cells requires flow cytometry or immunohistochemistry. The genomic T cell receptor locus is rearranged during T cell maturation, generating a highly variable T cell receptor locus in each mature T cell. As part of thymic maturation, T cells that will become CD4+ versus CD8+ are subjected to different selective pressures. In this study, we apply high-throughput next-generation sequencing to T cells from both a healthy cohort and a cohort with an autoimmune disease (multiple sclerosis) to identify sequence features in the variable CDR3 region of the rearranged T cell receptor gene that distinguish CD4+ from CD8+ T cells. We identify sequence features that differ between CD4+ and CD8+ T cells, including Variable gene usage and CDR3 region length. We implement a likelihood model to estimate relative proportions of CD4+ and CD8+ T cells using these features. Our model accurately estimates the proportion of CD4+ and CD8+ T cell sequences in samples from healthy and diseased immune systems, and simulations indicate that it can be applied to as few as 1000 T cell receptor sequences; we validate this model using in vitro mixtures of T cell sequences, and by comparing the results of our method to flow cytometry using peripheral blood samples. We believe our computational method for determining the CD4:CD8 ratio in T cell samples from sequence data will provide additional useful information for any samples on which high-throughput TCR sequencing is performed, potentially including some solid tumors.
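A small Python sketch of the kind of likelihood model described above: given per-class distributions of a CDR3 feature, the CD4 fraction of a mixed sample is estimated by maximizing the mixture likelihood; the single length feature and its probabilities are invented for illustration, whereas the actual model combines several sequence features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-class distributions of one CDR3 feature (here CDR3 length),
# standing in for the full set of V-gene usage and length features used in practice.
cd4_len_probs = {12: 0.15, 13: 0.30, 14: 0.35, 15: 0.20}
cd8_len_probs = {12: 0.35, 13: 0.35, 14: 0.20, 15: 0.10}

def sample(probs, n):
    ks, ps = zip(*probs.items())
    return rng.choice(ks, size=n, p=ps)

# Simulated sample containing 70% CD4 and 30% CD8 sequences.
lengths = np.concatenate([sample(cd4_len_probs, 700), sample(cd8_len_probs, 300)])

# Maximum-likelihood estimate of the CD4 fraction by a grid search over the mixture weight.
grid = np.linspace(0.01, 0.99, 99)
p4 = np.array([cd4_len_probs[l] for l in lengths])
p8 = np.array([cd8_len_probs[l] for l in lengths])
loglik = [np.sum(np.log(w * p4 + (1 - w) * p8)) for w in grid]
w_hat = grid[int(np.argmax(loglik))]
print(f"estimated CD4 fraction: {w_hat:.2f}, CD4:CD8 ratio ~ {w_hat / (1 - w_hat):.2f}")
```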
A fuzzy logic approach for timely adaptive traffic light based on traffic load
In this research, an adaptive timely traffic light is proposed as a solution for congestion in a typical area in Indonesia. Makassar City, particularly its most complex junction (the flyover, Pettarani, Reformasi highway and Urip S.), was observed for months using static cameras. The observed conditions were mapped into fuzzy logic to obtain better traffic light timing transitions than the current conventional traffic light system. Preliminary results show that the fuzzy logic approach offers a significant potential reduction in congestion: each traffic lane has 20-30% less congestion with future implementation of the proposed system.
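A minimal Python sketch of a fuzzy controller of the kind described above, mapping measured queue length to a green-phase duration through three triangular membership functions and weighted-average defuzzification; the membership breakpoints and output durations are assumptions, not the values tuned for the Makassar junction.

```python
import numpy as np

# Triangular membership function.
def tri(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def green_time(queue_len):
    """Map the measured queue length (vehicles) on a lane to a green-phase duration (seconds)
    with three fuzzy rules: low -> short, medium -> normal, high -> long green."""
    low = tri(queue_len, 0, 0, 10)
    med = tri(queue_len, 5, 15, 25)
    high = tri(queue_len, 20, 40, 40)
    # Weighted-average (Sugeno-style) defuzzification with assumed output durations.
    weights = np.array([low, med, high])
    durations = np.array([15.0, 30.0, 60.0])
    return float((weights * durations).sum() / (weights.sum() + 1e-9))

for q in (3, 12, 35):
    print(q, "vehicles ->", round(green_time(q), 1), "s of green")
```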
Building A Baby
We show how an agent can acquire conceptual knowledge by sensorimotor interaction with its environment. The method has much in common with the notion of image-schemas, which are central to Mandler’s theory of conceptual development. We show that Mandler’s approach is feasible in an artificial agent.
Photon angular momentum based multidimensional quantum key distribution
In this invited paper, we describe the photon angular momentum based entanglement assisted quantum key distribution (QKD) protocols, analyse their security, and determine the secure finite key fraction rates. Two types of entanglement-assisted protocols, namely two-basis and (D+1)-basis protocols (D is dimensionality of corresponding Hilbert space), have been identified as the most promising. We further analyse the implementation of the qubit gates required for these protocols. We demonstrate that photon angular momentum based QKD protocols significantly outperform corresponding two-dimensional QKD protocols in terms of the finite secret key rates.
Optimized Product Quantization for Approximate Nearest Neighbor Search
Product quantization is an effective vector quantization approach to compactly encode high-dimensional vectors for fast approximate nearest neighbor (ANN) search. The essence of product quantization is to decompose the original high-dimensional space into the Cartesian product of a finite number of low-dimensional subspaces that are then quantized separately. Optimal space decomposition is important for the performance of ANN search, but still remains unaddressed. In this paper, we optimize product quantization by minimizing quantization distortions w.r.t. the space decomposition and the quantization codebooks. We present two novel methods for optimization: a non-parametric method that alternatively solves two smaller sub-problems, and a parametric method that is guaranteed to achieve the optimal solution if the input data follows some Gaussian distribution. We show by experiments that our optimized approach substantially improves the accuracy of product quantization for ANN search.
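For reference, a short Python sketch of plain product quantization (per-subspace k-means codebooks plus asymmetric distance computation for search); it omits the optimized space decomposition that is the paper's contribution, and all sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical database of 1000 vectors in 32 dimensions, split into 4 subspaces of 8 dims,
# each quantized with its own 16-word codebook, so every vector is stored as 4 small codes.
X = rng.normal(size=(1000, 32))
M, K = 4, 16
sub_dim = X.shape[1] // M

codebooks, codes = [], []
for m in range(M):
    sub = X[:, m * sub_dim:(m + 1) * sub_dim]
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(sub)
    codebooks.append(km.cluster_centers_)
    codes.append(km.labels_)
codes = np.stack(codes, axis=1)                      # (1000, M) compact codes

def search(query, topk=5):
    """Asymmetric distance computation: precompute query-to-centroid distances per subspace,
    then sum table lookups over every database code."""
    tables = [np.linalg.norm(codebooks[m] - query[m * sub_dim:(m + 1) * sub_dim], axis=1) ** 2
              for m in range(M)]
    dists = sum(tables[m][codes[:, m]] for m in range(M))
    return np.argsort(dists)[:topk]

print(search(rng.normal(size=32)))   # indices of the approximate nearest neighbors
```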
Study on flip chip assembly of high density micro-LED array
Flip chip assembly technology is an attractive solution for high I/O density and fine-pitch microelectronics packaging. Recently, highly efficient GaN-based light-emitting diodes (LEDs) have undergone rapid development, and flip chip bonding has been widely applied to fabricate high-brightness GaN micro-LED arrays [1]. The flip chip GaN LED has some advantages over the traditional top-emission LED, including improved current spreading, higher light extraction efficiency, better thermal dissipation capability and the potential for further optical component integration [2, 3]. With the advantages of flip chip assembly, micro-LED (μLED) arrays with high I/O density can be produced with improved luminous efficiency compared with conventional p-side-up micro-LED arrays, and are suitable for many potential applications, such as micro-displays, bio-photonics and visible light communications (VLC). In particular, μLED-array-based self-emissive micro-displays promise high brightness and contrast, reliability, long life and compactness, with which conventional micro-displays such as LCD and OLED cannot compete. In this study, a GaN micro-LED array device fabricated with a flip chip assembly packaging process is presented. The bonding quality of the flip chip high-density micro-LED array is tested by a daisy chain test. P-n junction tests of the devices measure their electrical characteristics. The illumination condition of each micro-diode pixel was examined under a forward bias. Failure mode analysis was performed using cross sectioning and scanning electron microscopy (SEM). Finally, the fully packaged micro-LED array device is demonstrated as a prototype of a dice projector system.
Semantic and Verbatim Word Spotting Using Deep Neural Networks
In the last few years, deep convolutional neural networks have become ubiquitous in computer vision, achieving state-of-the-art results on problems like object detection, semantic segmentation, and image captioning. However, they have not yet been widely investigated in the document analysis community. In this paper, we present a word spotting system based on convolutional neural networks. We train a network to extract a powerful image representation, which we then embed into a word embedding space. This allows us to perform word spotting using both query-by-string and query-by-example in a variety of word embedding spaces, both learned and handcrafted, for verbatim as well as semantic word spotting. Our novel approach is versatile and the evaluation shows that it outperforms the previous state-of-the-art for word spotting on standard datasets.
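A small Python sketch of the retrieval step described above: word images and query strings are mapped into a common embedding space and ranked by cosine similarity, supporting both query-by-string and query-by-example; the embedding functions here are random placeholders for the learned CNN and string embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the learned mappings: in the actual system a CNN embeds word images and a
# separate function embeds query strings into the same word-embedding space.
def embed_image(img_id):
    return rng.normal(size=64)          # placeholder for the CNN image representation

def embed_string(word):
    rng_w = np.random.default_rng(abs(hash(word)) % (2**32))
    return rng_w.normal(size=64)        # placeholder for the string embedding

vocab_images = {i: embed_image(i) for i in range(100)}     # 100 hypothetical word images

def spot(query_vec, topk=5):
    """Query-by-string or query-by-example: rank word images by cosine similarity to the query."""
    ids, vecs = zip(*vocab_images.items())
    V = np.stack(vecs)
    sims = (V @ query_vec) / (np.linalg.norm(V, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    order = np.argsort(-sims)[:topk]
    return [(ids[i], float(sims[i])) for i in order]

print(spot(embed_string("church")))     # query-by-string
print(spot(vocab_images[7]))            # query-by-example
```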
Exploiting Semantic Information and Deep Matching for Optical Flow
Test format and corrective feedback modify the effect of testing on long-term retention
We investigated the effects of format of an initial test and whether or not students received corrective feedback on that test on a final test of retention 3 days later. In Experiment 1, subjects studied four short journal papers. Immediately after reading each paper, they received either a multiple choice (MC) test, a short answer (SA) test, a list of statements to read, or a filler task. The MC test, SA test, and list of statements tapped identical facts from the studied material. No feedback was provided during the initial tests. On a final test 3 days later (consisting of MC and SA questions), having had an intervening MC test led to better performance than an intervening SA test, but the intervening MC condition did not differ significantly from the read statements condition. To better equate exposure to test-relevant information, corrective feedback during the initial tests was introduced in Experiment 2. With feedback provided, having had an intervening SA test led to the best performance on the final test, suggesting that the more demanding the retrieval processes engendered by the intervening test, the greater the benefit to final retention. The practical application of these findings is that regular SA quizzes with feedback may be more effective in enhancing student learning than repeated presentation of target facts or taking an MC quiz.
Implementation of a split-bolus single-pass CT protocol at a UK major trauma centre to reduce excess radiation dose in trauma pan-CT.
AIM To quantify the dose reduction and ensure that the use of a split-bolus protocol provided sufficient vascular enhancement. MATERIALS AND METHODS Between 1 January 2014 and 31 May 2014, both split bolus and traditional two-phase scans were performed on a single CT scanner (SOMATOM Definition AS+, Siemens Healthcare) using a two-pump injector (Medrad Stellant). Both protocols used Siemens' proprietary tube current and tube voltage modulation techniques (CARE dose and CARE kV). The protocols were compared retrospectively to assess the dose-length product (DLP), aortic radiodensity at the level of the coeliac axis and radiodensity of the portal vein. RESULTS There were 151 trauma CT examinations during this period. Seventy-eight used the split-bolus protocol. Seventy-one had traditional two-phase imaging. One patient was excluded as they were under the age of 18 years. The radiodensity measurements for the portal vein were significantly higher (p<0.001) in the split-bolus protocol. The mean aortic enhancement in both protocols exceeded 250 HU, although the traditional two-phase protocol gave greater arterial enhancement (p<0.001) than the split-bolus protocol. The split-bolus protocol had a significantly lower (p<0.001) DLP with 43.5% reduction in the mean DLP compared to the traditional protocol. CONCLUSION Split-bolus CT imaging offers significant dose reduction for this relatively young population while retaining both arterial and venous enhancement.
Perioperative myocardial infarction/injury after noncardiac surgery.
Cardiovascular complications, particularly perioperative myocardial infarction/injury, seem to be major contributors to mortality after noncardiac surgery. With surgical procedures being very frequent (900 000/year in Switzerland), perioperative myocardial injury is common in everyday clinical practice. Over 80% of patients experiencing perioperative myocardial injury do not report symptoms. Therefore perioperative myocardial injury remains undiagnosed and untreated. Moreover, its silent presentation results in limited awareness among both clinicians and the public. Despite being largely asymptomatic, perioperative myocardial injury increases 30-day mortality nearly 10-fold. This review aims to increase the awareness of perioperative myocardial injury/infarction and give an overview of the emerging evidence, including pathophysiology, clinical presentation, prevention, and potential future treatments.
Complex network measures of brain connectivity: Uses and interpretations
Brain connectivity datasets comprise networks of brain regions connected by anatomical tracts or by functional associations. Complex network analysis, a new multidisciplinary approach to the study of complex systems, aims to characterize these brain networks with a small number of neurobiologically meaningful and easily computable measures. In this article, we discuss construction of brain networks from connectivity data and describe the most commonly used network measures of structural and functional connectivity. We describe measures that variously detect functional integration and segregation, quantify centrality of individual brain regions or pathways, characterize patterns of local anatomical circuitry, and test resilience of networks to insult. We discuss the issues surrounding comparison of structural and functional network connectivity, as well as comparison of networks across subjects. Finally, we describe a Matlab toolbox (http://www.brain-connectivity-toolbox.net) accompanying this article and containing a collection of complex network measures and large-scale neuroanatomical connectivity datasets.
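As a concrete example of such measures, the sketch below computes a few of them (clustering, global efficiency, betweenness centrality, characteristic path length) with the Python networkx package on a hypothetical thresholded connectivity matrix; the cited Matlab toolbox is the reference implementation, and this is only an illustration.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary connectivity matrix for 20 brain regions (symmetric, no self-connections),
# standing in for a thresholded anatomical or functional connectivity dataset.
n = 20
A = (rng.random((n, n)) < 0.2).astype(int)
A = np.triu(A, 1)
A = A + A.T
G = nx.from_numpy_array(A)

# Segregation: clustering coefficient; integration: global efficiency and path length;
# centrality: betweenness of individual regions.
print("mean clustering:", nx.average_clustering(G))
print("global efficiency:", nx.global_efficiency(G))
betw = nx.betweenness_centrality(G)
print("most central region:", max(betw, key=betw.get))
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
```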
Joint Extraction of Entities and Relations for Opinion Recognition
We present an approach for the joint extraction of entities and relations in the context of opinion recognition and analysis. We identify two types of opinion-related entities — expressions of opinions and sources of opinions — along with the linking relation that exists between them. Inspired by Roth and Yih (2004), we employ an integer linear programming approach to solve the joint opinion recognition task, and show that global, constraint-based inference can significantly boost the performance of both relation extraction and the extraction of opinion-related entities. Performance further improves when a semantic role labeling system is incorporated. The resulting system achieves F-measures of 79 and 69 for entity and relation extraction, respectively, improving substantially over prior results in the area.
Effect of alpha-lipoic acid on blood glucose, insulin resistance and glutathione peroxidase of type 2 diabetic patients.
OBJECTIVE To examine the effects of alpha-lipoic acid (ALA) treatment over a period of 2 months on fasting blood glucose (FBG), insulin resistance (IR), and glutathione peroxidase (GH-Px) activity in type 2 diabetes (T2DM) patients. METHODS This study took place in Motahari Clinic, Shiraz, Iran, which is affiliated to Shiraz University of Medical Sciences from May to October 2006. Type 2 DM patients (n=57) were divided into 2 groups to receive either ALA (300 mg daily) or placebo by systematic randomization, and were followed-up for 8 weeks. After an overnight fasting and 2 hours after breakfast, patients' blood samples were drawn and tested for FBG, 2 hours PPG, serum insulin level, and GH-Px activity. RESULTS The result of the study showed a significant decrease in FBG and PPG levels, IR-Homeostasis Model Assessment (IR-HOMA index) and GH-Px level in the ALA group. The comparison of differences between FBG and IR at the beginning and at the end of study in the ALA treated group and the placebo group were also significant. CONCLUSION This study supports the use of ALA as an antioxidant in the care of diabetic patients.
Review: mesenchymal stem cells: cell-based reconstructive therapy in orthopedics.
Adult stem cells provide replacement and repair descendants for normal turnover or injured tissues. These cells have been isolated and expanded in culture, and their use for therapeutic strategies requires technologies not yet perfected. In the 1970s, the embryonic chick limb bud mesenchymal cell culture system provided data on the differentiation of cartilage, bone, and muscle. In the 1980s, we used this limb bud cell system as an assay for the purification of inductive factors in bone. In the 1990s, we used the expertise gained with embryonic mesenchymal progenitor cells in culture to develop the technology for isolating, expanding, and preserving the stem cell capacity of adult bone marrow-derived mesenchymal stem cells (MSCs). The 1990s brought us into the new field of tissue engineering, where we used MSCs with site-specific delivery vehicles to repair cartilage, bone, tendon, marrow stroma, muscle, and other connective tissues. In the beginning of the 21st century, we have made substantial advances: the most important is the development of a cell-coating technology, called painting, that allows us to introduce informational proteins to the outer surface of cells. These paints can serve as targeting addresses to specifically dock MSCs or other reparative cells to unique tissue addresses. The scientific and clinical challenge remains: to perfect cell-based tissue-engineering protocols to utilize the body's own rejuvenation capabilities by managing surgical implantations of scaffolds, bioactive factors, and reparative cells to regenerate damaged or diseased skeletal tissues.
Effect of neoadjuvant cetuximab, capecitabine, and radiotherapy for locally advanced rectal cancer: results of a phase II study
The aim of this study was to investigate the efficacy and safety of neoadjuvant cetuximab, capecitabine, and radiotherapy for patients with locally advanced rectal cancer (LARC). Sixty-three eligible patients were enrolled in this study. Neoadjuvant treatment consisted of cetuximab and capecitabine for 6 weeks and radiotherapy for 5 weeks. Surgical resection was performed 6–8 weeks after the completion of neoadjuvant treatment. KRAS mutation status was analyzed retrospectively after the cetuximab treatment. All patients underwent a standardized postoperative follow-up for at least 3 years. A pathological complete response (pCR) was achieved in eight patients (12.7 %). Overall down-staging was found in 49 patients (77.8 %). The 3-year disease-free survival (DFS) rate and overall survival (OS) rate were 76.2 % and 81.0 %, respectively. The most common adverse events during neoadjuvant treatment were acneiform skin rash (82.5 %), radiodermatitis (46.0 %), and diarrhea (36.5 %). KRAS mutations were detected in 19 of 63 (31.2 %) tumors. The down-staging rate in patients with KRAS wild-type (WT) tumors was significantly higher than in patients with KRAS mutations (P = 0.020). There was no significant difference in the pCR rate, 3-year DFS rate, or 3-year OS rate between KRAS WT patients and KRAS-mutated patients. Neoadjuvant treatment with cetuximab and capecitabine-based chemoradiotherapy (CRT) is safe and well tolerated. The pCR rate, 3-year DFS rate, and OS rate are not superior to those of neoadjuvant chemoradiotherapy using two or more cytotoxic agents. KRAS WT status is highly associated with tumor down-staging in response to cetuximab plus capecitabine-based CRT in patients with LARC.
The Sources and Consequences of Mobile Technostress in the Workplace
In this study, we explore the phenomenon of mobile technostress: stress experienced by users of mobile information and communication technologies. We examine the impacts of mobile technostress on individuals’ job satisfaction. Based on the Transaction-Based Model of stress and the existing literature on technostress, a conceptual model is proposed to understand this phenomenon. Two sources of mobile technostress are identified: techno-overload and techno-insecurity. We hypothesize that techno-overload and techno-insecurity exert a negative impact on job satisfaction. Individual-level mobile technostress inhibitors (i.e., self-efficacy) are identified as helping individuals reduce stress, and we hypothesize that self-efficacy has a positive impact on job satisfaction. Furthermore, the moderating effects of habit are explored: we hypothesize that habit negatively moderates the relationship between mobile technostress creators and job satisfaction, and positively moderates the relationship between mobile technostress inhibitors and job satisfaction. The methodological design and the potential theoretical and practical implications are also discussed.
A tandem simulation framework for predicting mapping quality
Read alignment is the first step in most sequencing data analyses. Because a read’s point of origin can be ambiguous, aligners report a mapping quality, which is the probability that the reported alignment is incorrect. Despite its importance, there is no established and general method for calculating mapping quality. I describe a framework for predicting mapping qualities that works by simulating a set of tandem reads. These are like the input reads in important ways, but the true point of origin is known. I implement this method in an accurate and low-overhead tool called Qtip, which is compatible with popular aligners.
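The following toy sketch illustrates the tandem-simulation idea in miniature (it is not Qtip and uses no real aligner): reads with a known point of origin are simulated from a random reference, aligned by exhaustive search, and the empirical probability of a correct alignment is tabulated against the score gap to the runner-up location. All sequence lengths and error rates are made-up assumptions.

```python
# Toy illustration of the tandem-simulation idea (not Qtip itself).
import collections
import numpy as np

rng = np.random.default_rng(1)
alphabet = np.array(list("ACGT"))
reference = rng.choice(alphabet, size=2000)

def simulate_read(length=12, error_rate=0.15):
    """Sample a read from a known position and sprinkle in sequencing errors."""
    start = int(rng.integers(0, len(reference) - length))
    read = reference[start:start + length].copy()
    errors = rng.random(length) < error_rate
    read[errors] = rng.choice(alphabet, size=int(errors.sum()))
    return read, start

def align(read):
    """Exhaustive search: best-matching position and the gap to the second-best score."""
    scores = np.array([(read == reference[i:i + len(read)]).sum()
                       for i in range(len(reference) - len(read) + 1)])
    best = int(scores.argmax())
    gap = int(scores[best] - np.partition(scores, -2)[-2])
    return best, gap

correct_by_gap = collections.defaultdict(list)
for _ in range(1000):
    read, truth = simulate_read()
    best, gap = align(read)
    correct_by_gap[min(gap, 4)].append(best == truth)

for gap in sorted(correct_by_gap):
    hits = correct_by_gap[gap]
    print(f"score gap {gap}: P(correct) ~ {np.mean(hits):.2f} over {len(hits)} tandem reads")
```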
Accurate Optical Flow via Direct Cost Volume Processing
We present an optical flow estimation approach that operates on the full four-dimensional cost volume. This direct approach shares the structural benefits of leading stereo matching pipelines, which are known to yield high accuracy. To this day, such approaches have been considered impractical due to the size of the cost volume. We show that the full four-dimensional cost volume can be constructed in a fraction of a second due to its regularity. We then exploit this regularity further by adapting semi-global matching to the four-dimensional setting. This yields a pipeline that achieves significantly higher accuracy than state-of-the-art optical flow methods while being faster than most. Our approach outperforms all published general-purpose optical flow methods on both Sintel and KITTI 2015 benchmarks.
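As a rough illustration of the central data structure, the sketch below builds a small four-dimensional cost volume from absolute grayscale differences and reads off a winner-take-all flow; the paper's actual pipeline uses feature descriptors and semi-global matching, and the frame sizes, synthetic motion, and displacement range here are assumptions.

```python
# Minimal sketch of a 4D cost volume: cost[y, x, i, j] holds the cost of displacing
# pixel (y, x) in frame 1 by (v, u) into frame 2.
import numpy as np

rng = np.random.default_rng(0)
frame1 = rng.random((40, 60)).astype(np.float32)
frame2 = np.roll(frame1, shift=(2, 3), axis=(0, 1))       # synthetic motion (v, u) = (2, 3)

max_disp = 4
D = 2 * max_disp + 1
H, W = frame1.shape
cost = np.empty((H, W, D, D), dtype=np.float32)
padded = np.pad(frame2, max_disp, mode="edge")

for i, v in enumerate(range(-max_disp, max_disp + 1)):      # vertical displacement
    for j, u in enumerate(range(-max_disp, max_disp + 1)):  # horizontal displacement
        shifted = padded[max_disp + v:max_disp + v + H, max_disp + u:max_disp + u + W]
        cost[:, :, i, j] = np.abs(frame1 - shifted)

# Winner-take-all flow (no regularisation): pick the displacement with the lowest cost.
flat = cost.reshape(H, W, -1).argmin(axis=2)
flow_v = flat // D - max_disp
flow_u = flat % D - max_disp
print("median recovered flow (v, u):", np.median(flow_v), np.median(flow_u))
```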
Integrating procedural generation and manual editing of virtual worlds
Because of the increasing detail and size of virtual worlds, designers are more and more urged to consider employing procedural methods to alleviate part of their modeling work. However, such methods are often unintuitive to use, difficult to integrate, and provide little user control, making their application far from straightforward. In our declarative modeling approach, designers are provided with a more productive and simplified virtual world modeling workflow that matches better with their iterative way of working. Using interactive procedural sketching, they can quickly layout a virtual world, while having proper user control at the level of large terrain features. However, in practice, designers require a finer level of control. Integrating procedural techniques with manual editing in an iterative modeling workflow is an important topic that has remained relatively unaddressed until now. This paper identifies challenges of this integration and discusses approaches to combine these methods in such a way that designers can freely mix them, while the virtual world model is kept consistent during all modifications. We conclude that overcoming the challenges mentioned, for example in a declarative modeling context, is instrumental to achieve the much desired adoption of procedural modeling in mainstream virtual world modeling.
A randomised placebo-controlled trial of oral hydrocortisone for treating tobacco withdrawal symptoms
Many smokers experience a decline in cortisol to sub-normal levels during the first days of smoking cessation. A greater decline in cortisol is associated with more intense cigarette withdrawal symptoms, urge to smoke and relapse to smoking. Findings from an uncontrolled study suggest that glucocorticoids could ameliorate cigarette withdrawal. We investigated whether taking oral hydrocortisone would reduce withdrawal symptoms and the desire to smoke on the first day of temporary smoking abstinence compared with placebo. Using a double-blind within-subject randomised crossover design, 48 smokers took a single dose of 40 mg hydrocortisone, 20 mg hydrocortisone or placebo following overnight smoking abstinence. Abstinence was maintained through the afternoon, and withdrawal symptoms and the desire to smoke were rated across the morning. Salivary cortisol was assessed in the afternoon prior to abstinence (baseline) and while abstinent after each treatment. There was a significant dose–response relation between dose of hydrocortisone and reduction in depression and anxiety ratings while abstinent, but there were no other statistically significant associations with dose. Overall, the decline in cortisol following smoking cessation (placebo only) was not significant. Cortisol level on the afternoon of smoking abstinence was not significantly associated with symptom ratings. Supplements of hydrocortisone do not reduce the desire to smoke but may ameliorate withdrawal-related depression and anxiety, although the clinical benefit is slight.
Financial Innovation and the Management and Regulation of Financial Institutions
New security designs, improvements in computer telecommunications technology and advances in the theory of finance have led to revolutionary changes in the structure of financial markets and institutions. This paper provides a functional perspective on the dynamics of institutional change and uses a series of examples to illustrate the breadth and depth of institutional change that is likely to occur. These examples emphasize the role of hedging versus equity capital in managing risk, the need for risk accounting and changes in methods for implementing both regulatory and stabilization public policy.
Step Detection Robust against the Dynamics of Smartphones
A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms.
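A simplified peak-valley step counter in the spirit of the description above is sketched below; the sampling rate, threshold constants, and synthetic walking signal are assumptions, not the values used in the paper.

```python
# Illustrative peak-based step counter with adaptive magnitude and temporal thresholds.
import numpy as np

def count_steps(acc_magnitude, fs=50, min_step_interval=0.25, k=0.7):
    """Count steps from the magnitude of 3-axis acceleration sampled at fs Hz."""
    signal = acc_magnitude - np.mean(acc_magnitude)          # remove the gravity offset
    min_gap = int(min_step_interval * fs)                    # temporal threshold
    peaks, last_peak = [], -min_gap
    threshold = k * np.std(signal)                           # initial magnitude threshold
    for i in range(1, len(signal) - 1):
        is_peak = signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]
        if is_peak and signal[i] > threshold and i - last_peak >= min_gap:
            peaks.append(i)
            last_peak = i
            # adapt the threshold using a running estimate of recent peak heights
            threshold = k * np.mean(signal[peaks[-5:]])
    return len(peaks)

# Synthetic walking signal: ~2 steps per second for 10 seconds plus noise.
t = np.arange(0, 10, 1 / 50)
acc = 9.8 + 2.0 * np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
print("detected steps:", count_steps(acc), "(expected ~20)")
```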
Cognitive and psychomotor effects in males after smoking a combination of tobacco and cannabis containing up to 69 mg delta-9-tetrahydrocannabinol (THC)
Δ9-Tetrahydrocannabinol (THC) is the main active constituent of cannabis. In recent years, the average THC content of some cannabis cigarettes has increased to approximately 60 mg per cigarette (20% THC cigarettes). The acute cognitive and psychomotor effects of THC among recreational users after smoking cannabis cigarettes containing such high doses are unknown. The objective of this study was to examine the dose–effect relationship between the THC dose contained in cannabis cigarettes and cognitive and psychomotor effects for THC doses up to 69.4 mg (23%). This double-blind, placebo-controlled, randomised, four-way cross-over study included 24 non-daily male cannabis users (two to nine cannabis cigarettes per month). Participants smoked four cannabis cigarettes containing 0, 29.3, 49.1 and 69.4 mg THC on four exposure days. The THC dose in smoked cannabis was linearly associated with a slower response time in all tasks (simple reaction time, visuo-spatial selective attention, sustained attention, divided attention and short-term memory tasks) and with motor control impairment in the motor control task. The number of errors increased significantly with increasing doses in the short-term memory and the sustained attention tasks. Some participants showed no impairment in motor control even at THC serum concentrations higher than 40 ng/mL. High feeling and drowsiness differed significantly between treatments. Response time slowed and motor control worsened, both linearly, with increasing THC doses. Consequently, cannabis with high THC concentrations may be a concern for public health and safety if cannabis smokers are unable to titrate to a high feeling corresponding to a desired plasma THC level.
A hybrid neural network and ARIMA model for water quality time series prediction
Accurate predictions of time series data have motivated researchers to develop innovative models for water resources management. Time series data often contain both linear and nonlinear patterns; therefore, neither ARIMA models nor neural networks alone are adequate for modeling and predicting such data. The ARIMA model cannot deal with nonlinear relationships, while the neural network model alone is not able to handle both linear and nonlinear patterns equally well. In the present study, a hybrid ARIMA and neural network model is proposed that is capable of exploiting the strengths of traditional time series approaches and artificial neural networks. The proposed approach consists of an ARIMA methodology and a feed-forward, backpropagation network structure with an optimized conjugate gradient training algorithm. The hybrid approach for time series prediction is tested using 108 monthly observations of water quality data, including water temperature, boron and dissolved oxygen, during 1996–2004 at the Büyük Menderes river, Turkey. The results show that the hybrid model provides a robust modeling framework capable of capturing the nonlinear nature of the complex time series and thus producing more accurate predictions. The correlation coefficients between the hybrid model's predicted values and the observed data for boron, dissolved oxygen and water temperature are 0.902, 0.893, and 0.909, respectively, which are satisfactory in common model applications. Predicted water quality data from the hybrid model are compared with those from the ARIMA methodology and the neural network architecture using accuracy measures. Owing to its ability to recognize time series patterns and nonlinear characteristics, the hybrid model provides much better accuracy than the ARIMA and neural network models for water quality predictions.
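A generic sketch of the hybrid idea follows (not the paper's exact configuration or data): an ARIMA model captures the linear structure, and a small feed-forward network is trained on the ARIMA residuals to model the remaining nonlinear pattern; the synthetic series, model orders, and network size are assumptions.

```python
# Generic ARIMA + neural-network hybrid sketch on a synthetic monthly series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(120)                                   # e.g. 120 monthly observations
series = 10 + 0.05 * t + np.sin(t / 3) ** 3 + rng.normal(0, 0.2, t.size)
train, test = series[:108], series[108:]

# 1) Linear component.
arima = ARIMA(train, order=(2, 1, 1)).fit()
linear_forecast = arima.forecast(steps=len(test))
residuals = train - arima.fittedvalues

# 2) Nonlinear component: predict residual_t from the p previous residuals.
p = 4
X = np.array([residuals[i - p:i] for i in range(p, len(residuals))])
y = residuals[p:]
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, y)

# Roll the residual forecast forward using the network's own outputs.
window = list(residuals[-p:])
nonlinear_forecast = []
for _ in range(len(test)):
    nxt = mlp.predict(np.array(window[-p:]).reshape(1, -1))[0]
    nonlinear_forecast.append(nxt)
    window.append(nxt)

hybrid = linear_forecast + np.array(nonlinear_forecast)
print("RMSE ARIMA only :", np.sqrt(np.mean((test - linear_forecast) ** 2)))
print("RMSE hybrid     :", np.sqrt(np.mean((test - hybrid) ** 2)))
```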
Solving the antidepressant efficacy question: effect sizes in major depressive disorder.
BACKGROUND Numerous reviews and meta-analyses of the antidepressant literature in major depressive disorders (MDD), both acute and maintenance, have been published, some claiming that antidepressants are mostly ineffective and others that they are mostly effective, in either acute or maintenance treatment. OBJECTIVE The aims of this study were to review and critique the latest and most notable antidepressant MDD studies and to conduct our own reanalysis of the US Food and Drug Administration database studies specifically analyzed by Kirsch et al. METHODS We gathered effect estimates of each MDD study. In our reanalysis of the acute depression studies, we corrected analyses for a statistical floor effect so that relative (instead of absolute) effect size differences were calculated. We also critiqued a recent meta-analysis of the maintenance treatment literature. RESULTS Our reanalysis showed that antidepressant benefit is seen not only in severe depression but also in moderate depression and confirmed a lack of benefit for antidepressants over placebo in mild depression. Relative antidepressant versus placebo benefit increased linearly from 5% in mild depression to 12% in moderate depression to 16% in severe depression. The claim that antidepressants are completely ineffective, or even harmful, in maintenance treatment studies involves unawareness of the enriched design effect, which, in that analysis, was used to analyze placebo efficacy. The same problem exists for the standard interpretation of those studies, although they do not prove antidepressant efficacy either, since they are biased in favor of antidepressants. CONCLUSIONS In sum, we conclude that antidepressants are effective in acute depressive episodes that are moderate to severe but are not effective in mild depression. Except for the mildest depressive episodes, correction for the statistical floor effect proves that antidepressants are effective acutely. These considerations only apply to acute depression, however. For maintenance, the long-term efficacy of antidepressants is unproven, but the data do not support the conclusion that they are harmful.
A human gut microbial gene catalogue established by metagenomic sequencing
To understand the impact of gut microbes on human health and well-being it is crucial to assess their genetic potential. Here we describe the Illumina-based metagenomic sequencing, assembly and characterization of 3.3 million non-redundant microbial genes, derived from 576.7 gigabases of sequence, from faecal samples of 124 European individuals. The gene set, ∼150 times larger than the human gene complement, contains an overwhelming majority of the prevalent (more frequent) microbial genes of the cohort and probably includes a large proportion of the prevalent human intestinal microbial genes. The genes are largely shared among individuals of the cohort. Over 99% of the genes are bacterial, indicating that the entire cohort harbours between 1,000 and 1,150 prevalent bacterial species and each individual at least 160 such species, which are also largely shared. We define and describe the minimal gut metagenome and the minimal gut bacterial genome in terms of functions present in all individuals and most bacteria, respectively.
Dropping out: Why are students leaving junior high in China's poor rural areas?
Despite requirements of and support for universal education up to grade 9, there are concerning reports that poor rural areas in China suffer from high and perhaps even rising dropout rates. Although aggregated statistics from the Ministry of Education show almost universal compliance with the 9-year compulsory education law, there have been few independent, survey-based studies of dropout rates in China. Between 2009 and 2010 we surveyed over 7800 grade 7, 8, and 9 students from 46 randomly selected junior high schools in four counties in two provinces in North and Northwest China to measure the dropout rate. We also used the survey data to examine factors correlated with dropping out, such as the opportunity cost of going to school, household poverty, and poor academic performance. According to the study’s findings, dropout rates between grade 7 and grade 8 reached 5.7%, and dropout rates between grade 8 and grade 9 reached 9.0%. In sum, among the total number of students attending junior high school during the first month of the first term of grade 7, 14.2% had left school by the first month of grade 9. Dropout rates were even higher for students who were older, who came from poorer families (and families in which the parents were not healthy), or who were performing more poorly academically. We conclude that although the government’s policy of reducing tuition and fees for junior high students may be necessary, it is not sufficient to solve the dropout problem.
Principles of Persuasion in Social Engineering and Their Use in Phishing
Research on marketing and deception has identified principles of persuasion that influence human decisions. However, this research is scattered: it focuses on specific contexts and produces different taxonomies. In regard to frauds and scams, three taxonomies are often referred to in the literature: Cialdini’s principles of influence, Gragg’s psychological triggers, and Stajano et al.’s principles of scams. It is unclear how these taxonomies relate: some of their principles clearly overlap, whereas others appear complementary. We propose a way to connect these principles and present a merged and revised list. We then analyse various phishing emails and show that our principles are used therein in specific combinations. Our analysis of phishing is based on peer review, and further research is needed to make it automatic, but the approach we follow, together with the principles we propose, can be applied more consistently and more comprehensively than the original taxonomies.
Smart Lighting Solutions for Smart Cities
Smart cities play an increasingly important role in the sustainable economic development of a given area. They are considered a key element for generating wealth, knowledge and diversity, both economically and socially. A Smart City is the engine for reaching the sustainability of its infrastructure and for facilitating the sustainable development of its industry, buildings and citizens. The first goal on the way to that sustainability is to reduce energy consumption and the levels of greenhouse gases (GHG). This requires scalability, extensibility and the integration of new resources in order to reach greater awareness of energy consumption, distribution and generation, which allows suitable modeling and enables new countermeasures and action plans to mitigate the effects of the current excessive power consumption. Smart Cities should offer efficient support for global communications and access to services and information, which requires homogeneous and seamless machine-to-machine (M2M) communication across the different solutions and use cases. This work presents how to reach an interoperable Smart Lighting solution over emerging M2M protocols such as CoAP, built over the REST architecture. It follows the guidelines defined by the IP for Smart Objects Alliance (IPSO Alliance) in order to implement an interoperable semantic level for street lighting, and describes the integration of the communications and logic over the existing street lighting infrastructure.
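A hedged sketch of such an M2M interaction over CoAP is shown below, using the aiocoap Python library; the device address and the IPSO-style Light Control resource path (/3311/0/5850 for on/off) are assumptions for illustration and are not taken from the cited work.

```python
# Hedged sketch of a CoAP client reading and switching a hypothetical street light.
import asyncio
from aiocoap import Context, Message, GET, PUT

LAMP_URI = "coap://[fd00::1]/3311/0/5850"   # hypothetical endpoint, IPSO-style path assumed

async def main():
    protocol = await Context.create_client_context()

    # Read the current on/off state of the luminaire.
    response = await protocol.request(Message(code=GET, uri=LAMP_URI)).response
    print("current state:", response.payload.decode())

    # Switch the luminaire on.
    response = await protocol.request(Message(code=PUT, uri=LAMP_URI, payload=b"1")).response
    print("set request result:", response.code)

asyncio.run(main())
```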
Improving academic performance and mental health through a stress management intervention: outcomes and mediators of change.
Two hundred and nine pupils were randomly allocated to either a cognitive-behaviourally based stress management intervention (SMI) group or a non-intervention control group. Mood and motivation measures were administered pre- and post-intervention. Standardized examinations were taken 8-10 weeks later. As hypothesized, results indicated that an increase in the functionality of pupils' cognitions served as the mechanism by which mental health improved in the SMI group. In contrast, the control group demonstrated no such improvements. Also as predicted, an increase in motivation accounted for the SMI group's significantly better performance on the standardized academic assessments that comprise the United Kingdom's General Certificate of Secondary Education. Indeed, the magnitude of this enhanced performance was, on average, one letter grade. Discussion focuses on the theoretical and practical implications of these findings.
Computer-aided detection (CAD) of breast masses in mammography: combined detection and ensemble classification.
We propose a novel computer-aided detection (CAD) framework for breast masses in mammography. To increase detection sensitivity for various types of mammographic masses, we propose the combined use of different detection algorithms. In particular, we develop a region-of-interest combination mechanism that integrates detection information gained from unsupervised and supervised detection algorithms. Also, to significantly reduce the number of false-positive (FP) detections, a new ensemble classification algorithm is developed. Extensive experiments have been conducted on a benchmark mammogram database. Results show that our combined detection approach can considerably improve detection sensitivity with a small loss in FP rate, compared to representative detection algorithms previously developed for mammographic CAD systems. The proposed ensemble classification solution also has a dramatic impact on the reduction of FP detections: as much as 70% (from 15 to 4.5 per image) at a cost of only 4.6% in sensitivity (from 90.0% to 85.4%). Moreover, our proposed CAD method performs as well as or better than mammography CAD algorithms previously reported in the literature (70.7% and 80.0% sensitivity at 1.5 and 3.5 FPs per image, respectively).
Using Self-Assembled Monolayers to Model Cell Adhesion to the 9th and 10th Type III Domains of Fibronectin†
Most mammalian cells must adhere to the extracellular matrix (ECM) to maintain proper growth and development. Fibronectin is a predominant ECM protein that engages integrin cell receptors through its Arg-Gly-Asp (RGD) and Pro-His-Ser-Arg-Asn (PHSRN) peptide binding sites. To study the roles these motifs play in cell adhesion, proteins derived from the 9th (containing PHSRN) and 10th (containing RGD) type III fibronectin domains were engineered to be in frame with cutinase, a serine esterase that forms a site-specific, covalent adduct with phosphonate ligands. Self-assembled monolayers (SAMs) that present phosphonate ligands against an inert background of tri(ethylene glycol) groups were used as model substrates to immobilize the cutinase-fibronectin fusion proteins. Baby hamster kidney cells attached efficiently to all protein surfaces, but only spread efficiently on protein monolayers containing the RGD peptide. Cells on RGD-containing protein surfaces also displayed defined focal adhesions and organized cytoskeletal structures compared to cells on PHSRN-presenting surfaces. Cell attachment and spreading were unaffected by the presence of PHSRN when compared to RGD alone on SAMs presenting higher densities of protein, but PHSRN supported an increased efficiency of cell attachment when presented with RGD at low protein densities. Treatment of suspended cells with soluble RGD or PHSRN peptides revealed that both peptides were able to inhibit attachment to FN10 surfaces. These results support a model wherein PHSRN and RGD bind competitively to integrins, rather than through a two-point synergistic interaction, and the presence of PHSRN serves to increase the density of ligand on the substrate and therefore to enhance the sticking probability of cells during attachment.
Bilirubin—A Potential Marker of Drug Exposure in Atazanavir-Based Antiretroviral Therapy
The objective of this work was to examine the atazanavir–bilirubin relationship using a population-based approach and to assess the possible application of bilirubin as a readily available marker of atazanavir exposure. A model of atazanavir exposure and its concentration-dependent effect on bilirubin levels was developed based on 200 atazanavir and 361 bilirubin samples from 82 patients receiving atazanavir in the NORTHIV trial. The pharmacokinetics was adequately described by a one-compartment model with first-order absorption and lag-time. The maximum inhibition of bilirubin elimination rate constant (I max) was estimated at 91% (95% CI, 87–94) and the atazanavir concentration resulting in half of I max (IC50) was 0.30 μmol/L (95% CI, 0.24–0.37). At an atazanavir/ritonavir dose of 300/100 mg given once daily, the bilirubin half-life was on average increased from 1.6 to 8.1 h. A nomogram, which can be used to indicate suboptimal atazanavir exposure and non-adherence, was constructed based on model simulations.
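For illustration, the sketch below evaluates a model of this type: a one-compartment atazanavir model with first-order absorption and lag time whose concentration inhibits the bilirubin elimination rate constant through an Imax relation. Only Imax (91%), IC50 (0.30 µmol/L), and the baseline bilirubin half-life (1.6 h) come from the abstract; the dose conversion, bioavailability, rate constants, and volume of distribution are assumptions.

```python
# Worked illustration of a one-compartment model with first-order absorption and lag
# time, coupled to Imax inhibition of the bilirubin elimination rate constant.
import numpy as np

def atazanavir_conc(t_h, dose_umol=425.0, F=0.68, ka=0.9, ke=0.08, V=90.0, tlag=0.8):
    """Concentration (umol/L) after a single oral dose; all parameters are assumed values."""
    t = np.maximum(t_h - tlag, 0.0)
    return (F * dose_umol * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def bilirubin_halflife(conc_umol, base_halflife_h=1.6, imax=0.91, ic50=0.30):
    """Bilirubin half-life when its elimination rate constant is inhibited by atazanavir."""
    k_base = np.log(2) / base_halflife_h
    k = k_base * (1 - imax * conc_umol / (ic50 + conc_umol))
    return np.log(2) / k

for t in (2, 6, 12, 24):
    c = atazanavir_conc(t)
    print(f"t = {t:2d} h: conc ~ {c:.2f} umol/L, bilirubin half-life ~ {bilirubin_halflife(c):.1f} h")
```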
Discrepancy and Error in Radiology: Concepts, Causes and Consequences
In all branches of medicine, there is an inevitable element of patient exposure to problems arising from human error, and this is increasingly the subject of bad publicity, often skewed towards an assumption that perfection is achievable and that any error or discrepancy represents a wrong that must be punished. Radiology involves decision-making under conditions of uncertainty, and therefore cannot always produce infallible interpretations or reports. The interpretation of a radiologic study is not a binary process; the “answer” is not always normal or abnormal, cancer or not. The final report issued by a radiologist is influenced by many variables, not least among them the information available at the time of reporting. In some circumstances, radiologists are asked specific questions (in requests for studies) which they endeavour to answer; in many cases, no obvious specific question arises from the provided clinical details (e.g. “chest pain”, “abdominal pain”), and the reporting radiologist must strive to interpret what may be the concerns of the referring doctor. (A friend of one of the authors, while a resident in a North American radiology department, observed a staff radiologist dictate a chest x-ray report stating “No evidence of leprosy”. When subsequently confronted by an irate respiratory physician asking for an explanation of the seemingly perverse report, he explained that he had no idea what the clinical concerns were, as the clinical details section of the request form had been left blank.)
A Review of the Pinned Photodiode for CCD and CMOS Image Sensors
The pinned photodiode is the primary photodetector structure used in most CCD and CMOS image sensors. This paper reviews the development, physics, and technology of the pinned photodiode.
PAMM - A Robotic Aid to the Elderly for Mobility Assistance and Monitoring: A Helping-Hand for the Elderly
Meeting the needs of the elderly presents important technical challenges. In this research, a system concept for a robotic aid to provide mobility assistance and monitoring for the elderly and its enabling technologies are being developed. The system, called PAMM (Personal Aid for Mobility and Monitoring), is intended to assist the elderly living independently or in senior Assisted Living Facilities. It provides physical support and guidance, and it monitors the user's basic vital signs. An experimental test-bed used to evaluate the PAMM technology is described. This test-bed has a cane based configuration with a non-holonomic drive. Preliminary field trials at an eldercare facility are presented.
Free vibration analysis of a cantilever composite beam with multiple cracks
This study is an investigation of the effects of cracks on the dynamical characteristics of a cantilever composite beam, made of graphite fibre-reinforced polyamide. The finite element and the component mode synthesis methods are used to model the problem. The cantilever composite beam is divided into several components from the crack sections. Stiffness decreases due to cracks are derived from fracture mechanics theory as the inverse of the compliance matrix calculated with the proper stress intensity factors and strain energy release rate expressions. The effects of the location and depth of the cracks, and of the volume fraction and orientation of the fibre, on the natural frequencies and mode shapes of the beam with transverse non-propagating open cracks are explored. The results of the study lead to the conclusions that the presented method is adequate for the vibration analysis of cracked cantilever composite beams, and that, by using the drop in the natural frequencies and the change in the mode shapes, the presence and nature of cracks in a structure can be detected.
Mathematical Attacks on RSA Cryptosystem
In this paper, some of the most common attacks against the Rivest-Shamir-Adleman (RSA) cryptosystem are presented. We describe integer factoring attacks, attacks on the underlying mathematical function, as well as attacks that exploit details of implementations of the algorithm. Algorithms for each type of attack are developed and analyzed in terms of their complexity, memory requirements and area of usage.
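As a small worked example of one classical factoring attack, the sketch below applies Fermat's method, which factors an RSA modulus quickly when its two primes are close together (a weak key choice); properly generated RSA moduli with randomly chosen primes are not vulnerable to this.

```python
# Fermat factorization: n = a^2 - b^2 = (a - b)(a + b), efficient when p and q are close.
from math import isqrt

def fermat_factor(n):
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:            # a^2 - n is a perfect square, so n = (a - b)(a + b)
            return a - b, a + b
        a += 1

p, q = 1000003, 1000033            # deliberately close primes (a weak key choice)
n = p * q
print(fermat_factor(n), "==", (p, q))
```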
Covariate Shift in Hilbert Space: A Solution via Surrogate Kernels
Covariate shift is an unconventional learning scenario in which training and testing data have different distributions. A general principle to solve the problem is to make the training data distribution similar to that of the test domain, such that classifiers computed on the former generalize well to the latter. Current approaches typically target sample distributions in the input space; however, for kernel-based learning methods, the algorithm performance depends directly on the geometry of the kernel-induced feature space. Motivated by this, we propose to match data distributions in the Hilbert space, which, given a pre-defined empirical kernel map, can be formulated as aligning kernel matrices across domains. In particular, to evaluate the similarity of kernel matrices defined on arbitrarily different samples, the novel concept of the surrogate kernel is introduced based on Mercer’s theorem. Our approach caters model adaptation specifically to the kernel-based learning mechanism, and demonstrates promising results on several real-world applications.
Financialised Capitalism: Crisis and Financial Expropriation
The current crisis is one outcome of the financialisation of contemporary capitalism. It arose in the USA because of the enormous expansion of mortgage-lending, including to the poorest layers of the working class. It became general because of the trading of debt by financial institutions. These phenomena are integral to financialisation. During the last three decades, large enterprises have turned to open markets to obtain finance, forcing banks to seek alternative sources of profit. One avenue has been provision of financial services to individual workers. This trend has been facilitated by the retreat of public provision from housing, pensions, education, and so on. A further avenue has been to adopt investment-banking practices in open financial markets. The extraction of financial profits directly out of personal income constitutes financial expropriation. Combined with investment-banking, it has catalysed the current gigantic crisis. More broadly, financialisation has sustained the emergence of new layers of rentiers, defined primarily through their relation to the financial system rather than ownership of loanable capital. Finally, financialisation has posed important questions regarding finance-capital and imperialism.
Survey on Cloud Computing Security
Cloud computing is a modern computing paradigm with the potential to bring great benefits to Information and Communication Technology (ICT) and ICT-enabled business. The term "Cloud" comes from the graphic that was often used to depict heterogeneous networks and complex infrastructure; this graphic was adopted to describe the many aspects of cloud computing. In this paper, we aim to identify the security issues in cloud computing and present an analysis of security issues in a cloud environment. Solutions exist to a certain extent for the various issues.
Deep Layered Learning in MIR
Deep learning has boosted the performance of many music information retrieval (MIR) systems in recent years. Yet, the complex hierarchical arrangement of music makes end-to-end learning hard for some MIR tasks – a very deep and flexible processing chain is necessary to model some aspects of music audio. Representations involving tones, chords, and rhythm are fundamental building blocks of music. This paper discusses how these can be used as intermediate targets and priors in MIR to deal with structurally complex learning problems, with learning modules connected in a directed acyclic graph. It is suggested that this strategy for inference, referred to as deep layered learning (DLL), can help generalization by (1) enforcing the validity and invariance of intermediate representations during processing, and (2) letting the inferred representations establish the musical organization needed to support higher-level invariant processing. A background to modular music processing is provided together with an overview of previous publications. Relevant concepts from information processing, such as pruning, skip connections, and performance supervision, are reviewed within the context of DLL. A test is finally performed, showing how layered learning affects pitch tracking. The results indicate that offsets, in particular, are easier to detect when guided by extracted framewise fundamental frequencies.
NNIME: The NTHU-NTUA Chinese interactive multimodal emotion corpus
The increasing availability of large-scale emotion corpora, along with advances in emotion recognition algorithms, has enabled the emergence of next-generation human-machine interfaces. This paper describes a newly collected multimodal corpus, the NTHU-NTUA Chinese Interactive Emotion Corpus (NNIME). The database is the result of collaborative work between engineers and drama experts. It includes recordings of 44 subjects engaged in spontaneous dyadic spoken interactions. The multimodal data comprise approximately 11 hours of audio, video, and electrocardiogram data recorded continuously and synchronously. The database is also completed with a rich set of emotion annotations, both discrete and continuous in time, from a total of 49 annotators. These emotion annotations cover diverse perspectives: peer-report, director-report, self-report, and observer-report. This carefully engineered data collection and annotation process provides a valuable additional resource for quantifying and investigating various aspects of affective phenomena and human communication. To the best of our knowledge, the NNIME is one of the few large-scale Chinese affective dyadic interaction databases that have been systematically collected, organized, and prepared for public release to the research community.
Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms
In trying to solve multiobjective optimization problems, many traditional methods scalarize the objective vector into a single objective. In those cases, the obtained solution is highly sensitive to the weight vector used in the scalarization process and demands that the user have knowledge about the underlying problem. Moreover, in solving multiobjective problems, designers may be interested in a set of Pareto-optimal points, instead of a single point. Since genetic algorithms (GAs) work with a population of points, it seems natural to use GAs in multiobjective optimization problems to capture a number of solutions simultaneously. Although a vector evaluated GA (VEGA) has been implemented by Schaffer and has been tried to solve a number of multiobjective problems, the algorithm seems to have bias toward some regions. In this paper, we investigate Goldberg's notion of nondominated sorting in GAs along with a niche and speciation method to find multiple Pareto-optimal points simultaneously. The proof-of-principle results obtained on three problems used by Schaffer and others suggest that the proposed method can be extended to higher dimensional and more difficult multiobjective problems. A number of suggestions for extension and application of the algorithm are also discussed.
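A minimal sketch of nondominated sorting for a two-objective minimisation problem is given below; it captures only the core ranking idea, and the niche/speciation mechanism discussed above is omitted.

```python
# Naive nondominated sorting for a minimisation problem.
def dominates(a, b):
    """True if solution a is at least as good as b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(objectives):
    """Return a list of fronts; each front is a list of indices into `objectives`."""
    remaining = set(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# Example: minimise f1 and f2 simultaneously.
points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5), (2.5, 2.5)]
print(nondominated_sort(points))
# The first front contains the mutually nondominated points: (1,5), (2,3), (4,1), (2.5,2.5).
```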
Inhalation exposure to ambient polycyclic aromatic hydrocarbons and lung cancer risk of Chinese population.
An Eulerian atmospheric transport model (Canadian Model for Environmental Transport of Organochlorine Pesticides, CanMETOP) was applied and validated to estimate polycyclic aromatic hydrocarbon (PAH) ambient air concentrations at ground level in China based on a high-resolution emission inventory. The results were used to evaluate lung cancer risk for the Chinese population caused by inhalation exposure to PAHs. The uncertainties of the transport model, exposure, and risk analysis were assessed using Monte Carlo simulation, taking into consideration the variation in PAH emission, aerosol and OH radical concentrations, dry deposition, respiration rate, and genetic susceptibility. The average benzo[a]pyrene equivalent concentration (B[a]Peq) was 2.43 ng/m³ (interquartile range [IR], approximately 1.29-4.50). The population-weighted B[a]Peq was 7.64 ng/m³ (IR, approximately 4.05-14.1) because of the spatial overlap of the emissions and population density. It was estimated that 5.8% (IR, approximately 2.0-11%) of China's land area, where 30% (IR, approximately 17-43%) of the population lives, exceeded the national ambient B[a]Peq standard of 10 ng/m³. Taking into consideration the variation in exposure concentration, respiration rate, and susceptibility, the overall population attributable fraction (PAF) for lung cancer caused by inhalation exposure to PAHs was 1.6% (IR, approximately 0.91-2.6%), corresponding to an excess annual lung cancer incidence rate of 0.65 × 10⁻⁵. Although the spatial variability was high, the lung cancer risk in eastern China was higher than in western China, and populations in major cities had a higher risk of lung cancer than those in rural areas. An extremely high PAF of >44% was estimated in isolated locations near small-scale coke oven operations.
BUSSTEPP Lectures on Supersymmetry
This is the written version of the supersymmetry lectures delivered at the 30th and 31st British Universities Summer Schools in Theoretical Elementary Particle Physics (BUSSTEPP) held in Oxford in September 2000 and in Manchester in August-September 2001.
EFFICIENCY ENHANCEMENT OF BASE STATION POWER AMPLIFIERS USING DOHERTY TECHNIQUE
Power amplifiers are typically the most power-consuming block in wireless communication systems. Spectrum is expensive, and newer technologies demand transmission of the maximum amount of data with minimum spectrum usage. This requires sophisticated modulation techniques, leading to signals with wide dynamic range that require linear amplification. Although linear amplification is achievable, it always comes at the expense of efficiency. Most modern wireless applications such as WCDMA use non-constant-envelope modulation techniques with a high peak-to-average ratio. With linearity being a critical issue, power amplifiers implemented in such applications are forced to operate backed off from saturation. Therefore, in order to overcome the battery lifetime limitation, the design of a high-efficiency power amplifier that can maintain its efficiency over a wider range of radio-frequency input signal levels is the obvious solution. A new technique that improves the drain efficiency of a linear power amplifier, such as Class A or AB, over a wider range of output power has been investigated in this research. The Doherty technique consists of two amplifiers in parallel, combined in such a way that the power-added efficiency of the main amplifier is enhanced at 6 dB back-off from the maximum output power. The classes of power amplifier operation (A, AB, B, C, etc.) and the design techniques are presented. The design of a 2.14 GHz Doherty power amplifier is provided in chapter 4. This technique shows a 15% increase in power-added efficiency at 6 dB back-off from the compression point. This PA can be implemented in a WCDMA base station transmitter.
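The idealised textbook numbers below illustrate the efficiency problem and what the Doherty arrangement recovers; they are an assumption-laden illustration (ideal class-B and ideal symmetric Doherty behaviour), not measurements from this work.

```python
# Idealised class-B drain efficiency falls linearly with the normalised output amplitude,
# so backing off 6 dB roughly halves the efficiency; an ideal symmetric Doherty restores
# the peak value at 6 dB back-off.
import math

def class_b_efficiency(backoff_db):
    v = 10 ** (-backoff_db / 20)          # normalised output-voltage amplitude at this back-off
    return math.pi / 4 * v

for backoff in (0, 3, 6):
    print(f"ideal class B at {backoff} dB back-off: {class_b_efficiency(backoff) * 100:.1f}%")

print(f"ideal symmetric Doherty at 0 dB and 6 dB back-off: {math.pi / 4 * 100:.1f}%")
```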
A Replicable Web-based Negotiation Server for E-Commerce
This paper describes our ongoing R&D effort in developing a replicable, Web-based negotiation server to conduct bargaining-type negotiations between clients (i.e., buyers and sellers) in e-commerce. Multiple copies of this server can be paired with existing Web-servers to provide negotiation capabilities. Each client can select a trusted negotiation server to represent his/her interests. Web-based GUI tools are used by clients in a build-time registration process to specify the requirements, constraints, negotiation strategic rules, and preference scoring methods related to the buying or selling of a product. The registration information is used by the negotiation servers to conduct negotiations automatically on behalf of the clients. In this paper, we present the architecture of the negotiation server and the framework for automated negotiations, and describe a number of communication primitives, which make up the negotiation protocol. We have developed a constraint satisfaction processor (CSP) to evaluate a negotiation proposal against the registered constraints. An Event-Trigger-Rule (ETR) server manages events and triggers the execution of strategic rules, which may relax constraints, notify clients, or perform other operations. Strategic rules can be added and modified at run-time to deal with the dynamic nature of negotiations. A cost-benefit analysis performs quantitative analysis of alternative negotiation conditions. We have implemented a prototype system to demonstrate automated negotiations among buyers and suppliers in a supply chain management system.
Safety and efficacy of intravenously administered tedisamil for rapid conversion of recent-onset atrial fibrillation or atrial flutter.
OBJECTIVES The goal of the present study was to assess the efficacy and safety of intravenous tedisamil, a new antiarrhythmic compound, for conversion of recent-onset atrial fibrillation (AF) or atrial flutter (AFL) to normal sinus rhythm (NSR). BACKGROUND Tedisamil is a novel antiarrhythmic drug with predominantly class III activity. Its efficacy and safety for conversion of recent onset AF or AFL to NSR is not known. METHODS This was a multicenter, double-blind, randomized, placebo-controlled, sequential ascending dose-group trial. A total of 201 patients with symptomatic AF or AFL of 3 to 48 h duration were enrolled in a two-stage study. During stage 1, patients were randomized to receive tedisamil at 0.4 mg/kg body weight or matching placebo; during stage 2, patients received tedisamil at 0.6 mg/kg body weight or matching placebo. Treatments were given as single intravenous infusions. The primary study end point consisted of the percentage of patients converting to NSR for at least 60 s within 2.5 h. RESULTS Of 175 patients representing the intention-to-treat sample, conversion to NSR was observed in 41% (25/61) of the tedisamil 0.4 mg/kg group, 51% (27 of 53) of the tedisamil 0.6 mg/kg group, and 7% (4/59) of the placebo group (p < 0.001 for both tedisamil groups vs. placebo). Average time to conversion was 35 min in patients receiving tedisamil. There were two instances of self-terminating ventricular tachycardia: one episode of torsade de pointes and one of monomorphic ventricular tachycardia, both in patients receiving 0.6 mg/kg tedisamil. CONCLUSIONS Tedisamil at dosages of 0.4 and 0.6 mg/kg was superior to placebo in converting AF or AFL. Tedisamil has a rapid onset of action leading to conversion within 30 to 40 min in the majority of responders.
Wealth Distribution and Social Mobility in the Us: A Quantitative Approach
We quantitatively identify the factors that drive wealth dynamics in the U.S. and are consistent with its skewed cross-sectional distribution and with social mobility. We concentrate on three critical factors: i) skewed earnings, ii) differential saving rates across wealth levels, and iii) stochastic idiosyncratic returns to wealth. All of these are fundamental for matching both distribution and mobility. The stochastic process for returns which best fits the cross-sectional distribution of wealth and social mobility in the U.S. shares several statistical properties with those of the returns to wealth uncovered by Fagereng et al. (2017) from tax records in Norway.
A study on virtual machine deployment for application outsourcing in mobile cloud computing
In mobile cloud computing, application offloading is implemented as a software-level solution for augmenting the computing potential of smart mobile devices. The virtual machine (VM) is one of the prominent approaches for offloading computational load to cloud server nodes. A challenging aspect of such frameworks is the additional computing resources consumed in the deployment and management of VMs on the smartphone. The deployment of a VM requires computing resources for VM creation and configuration, while the management of a VM includes the computing resources used for monitoring the VM over its entire lifecycle and for managing the VM's physical resources on the smartphone. The objective of this work is to show that VM deployment and management require additional computing resources on the mobile device for application offloading. This paper analyzes the impact of VM deployment and management on the execution time of applications in different experiments. We investigate VM deployment and management for application processing in a simulation environment using CloudSim, a simulation toolkit that provides an extensible framework for modeling VM deployment and management for application processing in cloud-computing infrastructure. VM deployment and management in application processing are evaluated by analyzing VM deployment, the execution time of applications, and the total execution time of the simulation. The analysis concludes that VM deployment and management require additional resources on the computing host; therefore, VM deployment is a heavyweight approach for process offloading on smart mobile devices.
Potential adverse events of endosseous dental implants penetrating the maxillary sinus: long-term clinical evaluation.
OBJECTIVES/HYPOTHESIS The aim of this study was to evaluate the nature and incidence of long-term maxillary sinus adverse events related to endosseous implant placement with protrusion into the maxillary sinus. STUDY DESIGN Retrospective cohort study. METHODS All patients who underwent placement of endosseous dental implants with clinical evidence of implant penetration into the maxillary sinus with membrane perforation were included in this study. Only patients with a minimum follow-up of 5 years after implant placement were included. Maxillary sinus assessment was both clinical and radiological. RESULTS Eighty-three implants with sinus membrane perforation in 70 patients met the study's inclusion criteria. Mean age was 65.96 years ± 14.23. Twelve patients had more than one implant penetrating the maxillary sinus, and seven of them had bilateral sinus perforation. Estimated implant penetration was ≤ 3 mm in all cases. The average clinical and radiological follow-up was 9.98 years ± 3.74 (range 60-243 months). At the follow-up appointments, there were no clinical or radiological signs of sinusitis in any patient. CONCLUSION This long-term study, spanning a period of up to 20 years, indicates that no sinus complication was observed following implant penetration into the maxillary sinus. Furthermore, the absence of such complications is related to the maintenance of successful osseointegration. Conversely, in the presence of acute or chronic maxillary sinusitis, the differential diagnosis must always consider other potential odontogenic and nonodontogenic etiologies.
Piezoelectric Deflection Sensor for a Bi-Bellows Actuator
We employ a piezoelectric transducer polyvinylidene fluoride (PVDF) for a new application: tracking the shape of a highly flexible cantilever. The sensor was attached to the bi-bellows, a pneumatic actuator developed by the authors. We demonstrate how the sensor responds to changes of the actuator's shape and discuss its limitations. The combination of an internal pressure sensor and a PVDF sensor enables measurements of external forces applied on the actuator, in addition to its self position.
Deep Learning
Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users’ interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition and speech recognition, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules, analysing particle accelerator data, reconstructing brain circuits, and predicting the effects of mutations in non-coding DNA on gene expression and disease. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding, particularly topic classification, sentiment analysis, question answering and language translation. 
We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress.
An ILP approach for mapping AUTOSAR runnables on multi-core architectures
AUTOSAR (AUTomotive Open System ARchitecture) is a standard developed to manage the complexity of automotive E/E (Electrical/Electronic) architectures. It provides a layered, modular, configurable software architecture in order to facilitate the transfer and update of applications. Recently, multi-core architectures became supported by AUTOSAR. The design and implementation of AUTOSAR applications on multi-core architectures have raised new challenges. One of these challenges is related to the static configuration of the OS (Operating System). On a multi-core architecture, an important step of this configuration consists in mapping the runnables (pieces of the applicative code) to the cores. This mapping has to be done efficiently, i.e., it must satisfy real-time, functional, and safety requirements. For instance, the behavior of the application is affected by several factors such as the communication overhead. This paper presents an ILP (Integer Linear Programming) formulation of the process of mapping AUTOSAR runnables onto a multi-core architecture, with the aim of minimizing inter-core communication and balancing the processor load.
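A small ILP in the spirit of the described mapping problem is sketched below using the PuLP library; the runnable set, execution costs, communication volumes, and per-core load bound are made-up assumptions rather than values from the paper.

```python
# Illustrative ILP: assign runnables to cores, respect a load bound per core, and
# minimise the communication volume between runnables placed on different cores.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, PULP_CBC_CMD

runnables = ["R1", "R2", "R3", "R4"]
cores = [0, 1]
load = {"R1": 20, "R2": 30, "R3": 25, "R4": 15}               # assumed execution cost per runnable
comm = {("R1", "R2"): 10, ("R2", "R3"): 4, ("R3", "R4"): 8}   # assumed message volume between pairs
max_load_per_core = 50

prob = LpProblem("runnable_mapping", LpMinimize)
x = {(r, c): LpVariable(f"x_{r}_{c}", cat=LpBinary) for r in runnables for c in cores}
same = {p: LpVariable(f"same_{p[0]}_{p[1]}", cat=LpBinary) for p in comm}   # 1 if a pair shares a core
z = {(i, j, c): LpVariable(f"z_{i}_{j}_{c}", cat=LpBinary) for (i, j) in comm for c in cores}

for r in runnables:                       # each runnable is mapped to exactly one core
    prob += lpSum(x[r, c] for c in cores) == 1
for c in cores:                           # load balancing: capacity per core
    prob += lpSum(load[r] * x[r, c] for r in runnables) <= max_load_per_core
for (i, j) in comm:                       # linearisation: same[i,j] can be 1 only if i and j share a core
    for c in cores:
        prob += z[i, j, c] <= x[i, c]
        prob += z[i, j, c] <= x[j, c]
    prob += same[i, j] == lpSum(z[i, j, c] for c in cores)

# Objective: minimise inter-core communication volume.
prob += lpSum(comm[i, j] * (1 - same[i, j]) for (i, j) in comm)
prob.solve(PULP_CBC_CMD(msg=False))

for r in runnables:
    print(r, "-> core", next(c for c in cores if x[r, c].value() > 0.5))
```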
Tree-based Convolution for Sentence Modeling
In sentence modeling and classification, convolutional neural network approaches have recently achieved state-of-the-art results, but such efforts process word vectors sequentially and neglect long-distance dependencies. To exploit both deep learning and linguistic structures, we propose a tree-based convolutional neural network model that exploits various long-distance relationships between words. Our model outperforms sequential baselines on all three sentiment and question classification tasks and achieves the highest published accuracy on TREC.
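The toy sketch below illustrates the general idea of a tree-based convolution window, i.e. convolving over a node and its children in a parse tree so that words far apart in the sequence but close in the tree share a window; the dimensions, the averaging of children, and the weight sharing are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, out_dim = 4, 6
W_parent = rng.normal(scale=0.1, size=(out_dim, dim))
W_child = rng.normal(scale=0.1, size=(out_dim, dim))
b = np.zeros(out_dim)

def tree_conv(node_vec, child_vecs):
    # Convolution window = parent node plus (averaged) children; ReLU non-linearity.
    child_term = np.mean([W_child @ c for c in child_vecs], axis=0) if child_vecs else 0.0
    return np.maximum(0.0, W_parent @ node_vec + child_term + b)

# Tiny example tree: "movie" governs "great" and "this" in a hypothetical dependency parse.
vec = {w: rng.random(dim) for w in ["movie", "great", "this"]}
feature = tree_conv(vec["movie"], [vec["great"], vec["this"]])

# Max-pooling over all window outputs would then give a fixed-size sentence vector.
print(feature.shape)
```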
Moving camera video stabilization using homography consistency
Videos recorded on moving cameras are often shaky due to unstable carrier motion, and the video stabilization problem involves inferring the intended smooth motion to keep and the unintended shaky motion to remove. Conventional methods typically require proper, scenario-specific parameter settings, which do not generalize well across different scenarios. Moreover, we observe that a stable video should satisfy two conditions: a smooth trajectory and consistent inter-frame transitions. While conventional methods target only the former condition, we address both issues at the same time. In this paper, we propose a homography-consistency-based algorithm to directly extract the optimal smooth trajectory and evenly distribute the inter-frame transitions. By optimizing in the homography domain, our method needs no further matrix decomposition or parameter adjustment and automatically adapts to all possible types of motion (e.g., translational or rotational) and video properties (e.g., frame rates). We test our algorithm on translational videos recorded from a car and rotational videos from a hovering aerial vehicle, both at high and low frame rates. The results show that our method is widely applicable to different scenarios without any additional parameter adjustment.
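As a generic illustration of working directly in the homography domain (not the paper's consistency-based optimization), the sketch below accumulates the inter-frame homographies into a camera trajectory, smooths it with a simple moving average, and returns per-frame correction homographies; the smoothing window is an arbitrary assumption.

```python
import numpy as np

def stabilizing_corrections(inter_frame_H, window=5):
    # Camera trajectory: cumulative product of inter-frame homographies.
    traj = [np.eye(3)]
    for H in inter_frame_H:
        traj.append(H @ traj[-1])
    traj = np.stack(traj)

    # Smooth each homography entry with a moving average (a simple stand-in for
    # the paper's consistency-based optimization).
    smoothed = np.empty_like(traj)
    n = len(traj)
    for t in range(n):
        lo, hi = max(0, t - window), min(n, t + window + 1)
        smoothed[t] = traj[lo:hi].mean(axis=0)

    # Correction per frame: map the actual pose onto the smoothed one.
    return [S @ np.linalg.inv(T) for S, T in zip(smoothed, traj)]

# Usage: corrections[t] could be passed to cv2.warpPerspective(frame_t, corrections[t], size).
dummy = [np.eye(3) + 0.01 * np.random.default_rng(0).normal(size=(3, 3)) for _ in range(10)]
print(len(stabilizing_corrections(dummy)))
```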
Practical Identity-Based Encryption Without Random Oracles
We present an Identity Based Encryption (IBE) system that is fully secure in the standard model and has several advantages over previous such systems – namely, computational efficiency, shorter public parameters, and a “tight” security reduction, albeit to a stronger assumption that depends on the number of private key generation queries made by the adversary. Our assumption is a variant of Boneh et al.’s decisional Bilinear Diffie-Hellman Exponent assumption, which has been used to construct efficient hierarchical IBE and broadcast encryption systems. The construction is remarkably simple. It also provides recipient anonymity automatically, providing a second (and more efficient) solution to the problem of achieving anonymous IBE without random oracles. Finally, our proof of CCA2 security, which has more in common with the security proof for the Cramer-Shoup encryption scheme than with security proofs for other IBE systems, may be of independent interest.
Urban occupations in a siberian city (Tobolsk, 1897)
The article studies the late 19th-century occupational structure of Tobolsk in the context of other major Siberian cities. Many urban centres were strongholds for governing this huge territory, and Tobolsk was a typical provincial capital in this regard. In the most economically developed Western and Southern Siberian provinces, cities were not only administrative hubs, but also cultural and economic centres. The authors look at how urban populations were distributed among different occupational groups and social classes, and what role gender and family relations played in terms of employment. This is important, as it may help us understand whether Russia’s huge eastern provinces were ready for the transformations that began just two decades after the period from which the article’s main source material originates. The research is based on the first general census of the Russian Empire in 1897. The archives have not preserved the primary census manuscripts as a unified collection: so far, only scattered manuscripts have emerged. Clearly, the use of the individual-level nominative census data found for Tobolsk considerably broadens the scope of the research, which was previously limited to aggregate data. The aggregate data provide an opportunity to characterise employment in Siberian cities more generally, demonstrating the occupational specificity of the ‘military’ and ‘agrarian’ cities as well as the provincial centres of Western and Eastern Siberia. The authors analyse the nominative 1897 census data more closely using the database ‘Tobolsk Population in 1897’, which contains information about 92.5 % of employed citizens. The individual-level data made it possible to reconstruct the age and gender structure of the economically active population of the provincial centre, to study the occupations of different estate groups, to look into specific features of secondary occupations, and to see the family’s influence on the choice of occupation. All the employment data on the Siberian urban population were coded according to the HISCO standard.
Resonant-Clock Design for a Power-Efficient, High-Volume x86-64 Microprocessor
AMD's 32-nm x86-64 core code-named “Piledriver” features a resonant global clock distribution to reduce clock distribution power while maintaining a low clock skew. To support a wide range of operating frequencies expected of the core, the global clock system operates in two modes: a resonant-clock (rclk) mode for energy-efficient operation over a desired frequency range and a conventional, direct-drive mode (cclk) to support low-frequency operation. This dual-mode feature was implemented with minimal area impact to achieve both reduced average power dissipation and improved power-constrained performance. In Piledriver, resonant clocking achieves a peak 25% global clock power reduction at 75 °C, which translates to a 4.5% reduction in average application core power.
A Single-Phase Single-Stage High Step-Up AC–DC Matrix Converter Based on Cockcroft–Walton Voltage Multiplier With PFC
This paper proposes a high-performance, transformerless, single-stage, high step-up ac-dc matrix converter based on the Cockcroft-Walton (CW) voltage multiplier. Deploying a four-bidirectional-switch matrix converter between the ac source and the CW circuit, the proposed converter provides high-quality line conditions, an adjustable output voltage, and low output ripple. The matrix converter operates with two independent frequencies: one is associated with power factor correction (PFC) control, and the other sets the output frequency of the matrix converter. Moreover, the relationship among the latter frequency, the line frequency, and the output ripple is discussed. This paper adopts the one-cycle control method to achieve PFC, and a commercial control IC combined with a preprogrammed complex programmable logic device serves as the system controller. The operation principle, control strategy, and design considerations of the proposed converter are all detailed in this paper. A 1.2-kV/500-W laboratory prototype of the proposed converter was built for testing, measurement, and evaluation. At full load, the measured power factor, system efficiency, and output ripple factor are 99.9%, 90.3%, and 0.3%, respectively. The experimental results demonstrate the high performance of the proposed converter and its validity for high step-up ac-dc applications.
Corporate and Governmental Deviance: Problems of Organizational Behavior in Contemporary Society
Corporate and Governmental Deviance, Fifth Edition, is a reader intended for use in courses on corporate crime in departments of sociology and criminology. This edition has been updated with several new readings, including an article on the recently uncovered government experiments with radiation. This book is intended for undergraduate and graduate courses in criminology, social deviance, and social problems.
Treemap: An O(log n) algorithm for indoor simultaneous localization and mapping
This article presents a very efficient SLAM algorithm that works by hierarchically dividing a map into local regions and subregions. At each level of the hierarchy, each region stores a matrix representing some of the landmarks contained in that region. To keep those matrices small, only landmarks that are observable from outside the region are represented. A measurement is integrated into a local subregion using O(k^2) computation time for k landmarks in a subregion. When the robot moves to a different subregion, a full least-squares estimate for that region is computed in only O(k^3 log n) computation time for n landmarks. A global least-squares estimate needs O(kn) computation time with a very small constant (12.37 ms for n = 11300). The algorithm is evaluated for map quality, storage space, and computation time using simulated and real experiments in an office environment.
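The following highly simplified sketch only illustrates the flavour of the hierarchical map structure described above: each region keeps a small matrix over the landmarks observable from outside it, measurements update a leaf locally, and re-estimation after a region change touches only the O(log n) regions on the path to the root. All names and fields are hypothetical, not the paper's implementation.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Region:
    boundary_landmarks: list            # landmarks observable from outside this region
    info_matrix: np.ndarray             # small matrix over the boundary landmarks only
    children: list = field(default_factory=list)
    parent: "Region | None" = None

def integrate_measurement(leaf: Region, H: np.ndarray):
    # Local update: O(k^2) work for k boundary landmarks in this subregion.
    leaf.info_matrix += H.T @ H

def regions_to_update(leaf: Region):
    # Moving to another subregion triggers re-estimation along the path to the
    # root, i.e. O(log n) regions, each costing O(k^3).
    path, node = [], leaf
    while node is not None:
        path.append(node)
        node = node.parent
    return path

root = Region([], np.zeros((0, 0)))
leaf = Region(["l1", "l2"], np.zeros((2, 2)), parent=root)
integrate_measurement(leaf, np.eye(2))
print(len(regions_to_update(leaf)))   # 2 regions on the path: the leaf and the root
```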
Job recommender systems: A survey
Personalized recommender systems have been proposed to solve the problem of information overload and are widely applied in many domains. Job recommender systems for the recruiting domain have emerged and enjoyed explosive growth in the last decades. User profiles and recommendation technologies in job recommender systems have gained attention, have been investigated in academia, and have been implemented for several application cases in industry. In this paper, we introduce basic concepts of user profiles and common recommendation technologies based on the existing research. Finally, we survey several typical job recommender systems that have been built, to give a general overview of the field.
Automatic Generation of Personalized Chinese Handwriting Characters
This paper presents a novel algorithmic method for automatically generating personal handwriting styles of Chinese characters through an example-based approach. The method first splits a whole Chinese character into multiple constituent parts, such as strokes, radicals, and frequent character components. The algorithm then analyzes and learns the characteristics of character handwriting styles, both as defined in the Chinese national font standard and as exhibited in a person’s own handwriting records. In this analysis, we adopt a parametric representation of character shapes and also examine the spatial relationships between the multiple constituent components of a character. By imitating the shapes of individual character components as well as the spatial relationships between them, the proposed method can automatically generate personalized handwriting in an example-based fashion. To evaluate the quality of our automatic generation algorithm, we compare the computer-generated results with authentic human handwriting samples; Chinese subjects in our user study found the results satisfactory for entertainment and mobile applications.
Preliminary safety assessment for a sectorless ATM concept
In a sectorless air traffic management concept, the airspace is no longer divided into sectors but is regarded as one piece. A number of aircraft, not necessarily in the same airspace region, are assigned to each air traffic controller, who is then responsible for these aircraft from their entry into the airspace to their exit. These individually assigned flights can be anywhere in the airspace and therefore in different traffic situations. This means that air traffic controllers will manage flights which may not be geographically connected. Such a concept change will also necessitate significant changes in the controllers’ routines and support tools. Naturally, the question of safety arises regarding the new procedures and systems. This paper provides a preliminary safety assessment for a sectorless air traffic management concept. The assessment is based on the Single European Sky ATM Research (SESAR) Safety Reference Material, which was originally developed for SESAR purposes. This success-based approach stresses the positive contribution of a new concept, while traditional approaches mainly consider the negative effect of possible hazards and failures. Based on validation activities including real-time simulations, we have developed safety acceptance criteria and safety objectives for a sectorless air traffic management (ATM) concept. Starting from these, we have sketched the safety performance requirement model and deduced the first safety requirements for normal conditions, abnormal conditions, and internal system failures.
Marginalized Denoising Autoencoders for Domain Adaptation
Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA), which addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, mSDA marginalizes out the noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters; in fact, the parameters are computed in closed form. Consequently, mSDA, which can be implemented in only 20 lines of MATLAB, speeds up SDAs by two orders of magnitude. Furthermore, the representations learnt by mSDA are as effective as those of traditional SDAs, attaining almost identical accuracies on benchmark tasks.
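The closed-form layer can be sketched compactly; the numpy version below follows the construction described above (corruption marginalized out in expectation, so the mapping is solved directly without gradient descent), but details such as the bias handling and the small regularizer are assumptions rather than a verbatim port of the authors' MATLAB code.

```python
import numpy as np

def mda_layer(X, p):
    """X: d x n data matrix, p: feature corruption probability."""
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])               # append a constant (bias) feature
    q = np.concatenate([np.full(d, 1.0 - p), [1.0]])   # keep-probabilities; bias never corrupted
    S = Xb @ Xb.T                                       # scatter matrix
    Q = S * np.outer(q, q)                              # expected corrupted-corrupted covariance
    np.fill_diagonal(Q, q * np.diag(S))                 # diagonal uses the keep-probability once
    P = S[:d, :] * q                                    # expected clean-corrupted cross-covariance
    W = P @ np.linalg.inv(Q + 1e-5 * np.eye(d + 1))     # closed-form mapping (small regularizer assumed)
    return np.tanh(W @ Xb)                              # hidden representation of this layer

# Stacking: feed each layer's output into the next call of mda_layer, as in stacked
# denoising autoencoders.
X = np.random.default_rng(0).random((20, 100))
print(mda_layer(X, p=0.5).shape)   # (20, 100)
```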
Video quality for face detection, recognition, and tracking
Many distributed multimedia applications rely on video analysis algorithms for automated video and image processing. Little is known, however, about the minimum video quality required to ensure accurate performance of these algorithms. In an attempt to understand these requirements, we focus on a set of commonly used face analysis algorithms. Using standard datasets and live videos, we conducted experiments demonstrating that the algorithms show almost no decrease in accuracy until the input video is reduced to a certain critical quality, which corresponds to a significantly lower bitrate than the quality commonly considered acceptable for human vision. Since computer vision perceives video differently from human vision, existing video quality metrics, designed for human perception, cannot be used to reason about the effects of video quality reduction on the accuracy of video analysis algorithms. We therefore investigate two alternative video quality metrics, blockiness and mutual information, and show how they can be used to estimate the critical video qualities for face analysis algorithms.
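As an example of the kind of metric involved, the sketch below computes a simple blockiness score by comparing pixel differences across assumed 8x8 coding-block boundaries with differences inside blocks; this is one common formulation and not necessarily the exact metric used in the paper.

```python
import numpy as np

def blockiness(gray, block=8):
    gray = gray.astype(np.float64)
    dh = np.abs(np.diff(gray, axis=1))               # horizontal neighbour differences
    cols = np.arange(dh.shape[1])
    at_boundary = (cols % block) == (block - 1)       # column pairs that straddle a block edge
    boundary_diff = dh[:, at_boundary].mean()
    interior_diff = dh[:, ~at_boundary].mean()
    return boundary_diff / (interior_diff + 1e-9)     # values well above 1 suggest visible block edges

frame = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(blockiness(frame))
```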
High-performance complex event processing over streams
In this paper, we present the design, implementation, and evaluation of a system that executes complex event queries over real-time streams of RFID readings encoded as events. These complex event queries filter and correlate events to match specific patterns, and transform the relevant events into new composite events for the use of external monitoring applications. Stream-based execution of these queries enables time-critical actions to be taken in environments such as supply chain management, surveillance and facility management, healthcare, etc. We first propose a complex event language that significantly extends existing event languages to meet the needs of a range of RFID-enabled monitoring applications. We then describe a query plan-based approach to efficiently implementing this language. Our approach uses native operators to efficiently handle query-defined sequences, which are a key component of complex event processing, and pipeline such sequences to subsequent operators that are built by leveraging relational techniques. We also develop a large suite of optimization techniques to address challenges such as large sliding windows and intermediate result sizes. We demonstrate the effectiveness of our approach through a detailed performance analysis of our prototype implementation under a range of data and query workloads as well as through a comparison to a state-of-the-art stream processor.
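To give a feel for stream-based sequence detection (without reproducing the system's query language or plan operators), the toy sketch below matches a SEQ(A, B) pattern within a sliding time window over a stream of timestamped events; the event types and window length are hypothetical.

```python
from collections import deque

def seq_within(stream, first_type, second_type, window):
    """stream yields (timestamp, event_type, payload) tuples in timestamp order."""
    pending = deque()                        # open partial matches of the first event type
    for ts, etype, payload in stream:
        # Drop partial matches that have fallen out of the sliding window.
        while pending and ts - pending[0][0] > window:
            pending.popleft()
        if etype == first_type:
            pending.append((ts, payload))
        elif etype == second_type:
            for start_ts, start_payload in pending:
                yield (start_ts, ts, start_payload, payload)   # emit a composite event

# Hypothetical RFID readings: a tag read at a shelf, then at the exit.
events = [(1, "READ_SHELF", "tag42"), (3, "READ_EXIT", "tag42"), (20, "READ_EXIT", "tag7")]
print(list(seq_within(iter(events), "READ_SHELF", "READ_EXIT", window=10)))
```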
Understanding the Effective Receptive Field in Deep Convolutional Neural Networks
We study characteristics of receptive fields of units in deep convolutional networks. The receptive field size is a crucial issue in many visual tasks, as the output must respond to large enough areas in the image to capture information about large objects. We introduce the notion of an effective receptive field, and show that it both has a Gaussian distribution and only occupies a fraction of the full theoretical receptive field. We analyze the effective receptive field in several architecture designs, and the effect of nonlinear activations, dropout, sub-sampling and skip connections on it. This leads to suggestions for ways to address its tendency to be too small.
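One practical way to visualize an effective receptive field is to back-propagate a unit gradient from the central output activation and inspect the magnitude of the resulting input gradient; the PyTorch sketch below does exactly that for a toy stack of 3x3 convolutions with uniform weights, which is an assumed setup for illustration rather than the paper's experimental configuration.

```python
import torch
import torch.nn as nn

# Ten 3x3 convolution layers, single channel, no bias.
net = nn.Sequential(*[nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False) for _ in range(10)])
for m in net:
    nn.init.constant_(m.weight, 1.0 / 9.0)        # uniform weights (a simplifying assumption)

x = torch.zeros(1, 1, 65, 65, requires_grad=True)
y = net(x)
grad_seed = torch.zeros_like(y)
grad_seed[0, 0, 32, 32] = 1.0                      # unit gradient at the central output unit
y.backward(grad_seed)

erf = x.grad[0, 0].abs()                           # effective receptive field on the input
print(erf.max(), (erf > 1e-3 * erf.max()).sum())   # peak value and approximate support size
```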