Electrical energy storage for the grid: a battery of choices.
The increasing interest in energy storage for the grid can be attributed to multiple factors, including the capital costs of managing peak demands, the investments needed for grid reliability, and the integration of renewable energy sources. Although existing energy storage is dominated by pumped hydroelectric, there is the recognition that battery systems can offer a number of high-value opportunities, provided that lower costs can be obtained. The battery systems reviewed here include sodium-sulfur batteries that are commercially available for grid applications, redox-flow batteries that offer low cost, and lithium-ion batteries whose development for commercial electronics and electric vehicles is being applied to grid storage.
A cost-effective context memory structure for dynamically reconfigurable processors
Multicontext reconfigurable processors can switch their configurations in a single clock cycle by providing a context memory in each processing element. Although these processors have proven powerful in many applications, the number of contexts is often insufficient. We propose a context translation table, which translates the global instruction pointer, or global logical context number, into a local physical context number, making it possible to realize larger applications while reducing the actual context memories. Our evaluation using NEC Electronics' DRP-1 shows that the proposed method is effective when the tile size is small and the number of contexts is large. In the most efficient case, the required number of contexts is reduced to 25%, and the total amount of configuration data falls to 6.9%. The template configuration method, which extends this idea, harnesses the power of multicontext devices by storing basic contexts as templates and combining them to form the actual contexts. While effective in theory, our evaluation shows that the return from adopting such mechanisms in finer-grained processors such as the DRP-1 is minimal, because the size of the context memory grows relative to the number of processing units.
Class Noise vs. Attribute Noise: A Quantitative Study
Real-world data is never perfect and can often suffer from corruptions (noise) that may impact interpretations of the data, models created from the data, and decisions made based on the data. Noise can reduce system performance in terms of classification accuracy, time in building a classifier, and the size of the classifier. Accordingly, most existing learning algorithms have integrated various approaches to enhance their ability to learn from noisy environments, but the existence of noise can still introduce serious negative impacts. A more reasonable solution might be to employ preprocessing mechanisms to handle noisy instances before a learner is formed. Unfortunately, little research has been conducted to systematically explore the impact of noise, especially from the noise-handling point of view. This has made various noise-processing techniques less significant, specifically when dealing with noise that is introduced in attributes. In this paper, we present a systematic evaluation of the effect of noise in machine learning. Instead of taking any unified theory of noise to evaluate noise impacts, we differentiate noise into two categories, class noise and attribute noise, and analyze their impacts on system performance separately. Because class noise has been widely addressed in existing research efforts, we concentrate on attribute noise. We investigate the relationship between attribute noise and classification accuracy, the impact of noise at different attributes, and possible solutions for handling attribute noise. Our conclusions can be used to guide interested readers in enhancing data quality by designing various noise-handling mechanisms.
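The class-noise / attribute-noise distinction above can be made concrete with a small sketch: corrupting labels versus corrupting feature values. This is an illustrative assumption, not the paper's exact noise model; the noise rates, uniform replacement scheme, and helper names are made up.

```python
import random

def add_class_noise(labels, rate, classes, rng):
    """Flip each label to a different class with probability `rate` (class noise)."""
    noisy = []
    for y in labels:
        if rng.random() < rate:
            noisy.append(rng.choice([c for c in classes if c != y]))
        else:
            noisy.append(y)
    return noisy

def add_attribute_noise(rows, rate, rng):
    """Replace each attribute value, with probability `rate`, by a random value
    drawn uniformly from that attribute's observed range (attribute noise)."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    noisy = []
    for row in rows:
        noisy.append(tuple(
            rng.uniform(lo[j], hi[j]) if rng.random() < rate else v
            for j, v in enumerate(row)))
    return noisy
```

Separating the two corruption paths like this is what allows their impacts on accuracy to be measured independently.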
Bijective proofs of Gould's and Rothe's identities
We first give a bijective proof of Gould's identity in the model of binary words. Then we deduce Rothe's identity from Gould's identity again by a bijection, which also leads to a double-sum extension of the q-Chu-Vandermonde formula.
Paralleling Of Power MOSFETs For Higher Power Output
For worst case parameter mismatch, modest levels of unbalance are predicted through the use of minimum gate decoupling, dynamic load lines with high Q values, common source inductance or high yield screening. Each technique is evaluated in terms of current unbalance, transition energy, peak turn-off voltage and parasitic oscillations, as appropriate, for various pulse duty cycles and frequency ranges.
Repetitive transcranial magnetic stimulation to SMA worsens complex movements in Parkinson's disease
Objectives: To evaluate the therapeutic potential of repetitive transcranial magnetic stimulation (rTMS) for Parkinson's disease (PD) by delivering stimulation at higher intensity and frequency over longer time than in previous research. Promising beneficial effects on movement during or after rTMS have been reported. Methods: Ten patients with idiopathic PD were enrolled in a randomized crossover study comparing active versus sham rTMS to the supplementary motor area (SMA). Assessments included reaction and movement times (RT/MT), quantitative spiral analysis, timed motor performance tests, United Parkinson's Disease Rating Scale (UPDRS), patient self-report and guess as to stimulation condition. Results: Two of 10 patients could not tolerate the protocol. Thirty to 45 min following stimulation, active rTMS as compared with sham stimulation worsened spiral drawing (P = 0.001) and prolonged RT in the most affected limb (P = 0.030). No other significant differences were detected. Conclusions: We sought clinically promising improvement in PD but found subclinical worsening of complex and preparatory movement following rTMS to SMA. These results raise safety concerns regarding the persistence of dysfunction induced by rTMS while supporting the value of rTMS as a research tool. Studies aimed at understanding basic mechanisms and timing of rTMS effects are needed. © 2001 Elsevier Science Ireland Ltd. All rights reserved.
Metabolic Signatures of Cultured Human Adipocytes from Metabolically Healthy versus Unhealthy Obese Individuals
BACKGROUND AND AIMS Among obese subjects, metabolically healthy and unhealthy obesity (MHO/MUHO) can be differentiated: the latter is characterized by whole-body insulin resistance, hepatic steatosis, and subclinical inflammation. The aim of this study was to identify adipocyte-specific metabolic signatures and functional biomarkers for MHO versus MUHO. METHODS 10 insulin-resistant (IR) vs. 10 insulin-sensitive (IS) non-diabetic morbidly obese (BMI >40 kg/m2) Caucasians were matched for gender, age, BMI, and percentage of body fat. From subcutaneous fat biopsies, primary preadipocytes were isolated and differentiated to adipocytes in vitro. About 280 metabolites were investigated by a targeted metabolomic approach intracellularly, extracellularly, and in plasma. RESULTS/INTERPRETATION Among others, aspartate was reduced intracellularly to one third (p = 0.0039) in IR adipocytes, pointing to a relative depletion of citric acid cycle metabolites or reduced aspartate uptake in MUHO. Other amino acids, already known to correlate with diabetes and/or obesity, were identified to differ between MUHO's and MHO's adipocytes, namely glutamine, histidine, and spermidine. Most species of phosphatidylcholines (PCs) were lower in MUHO's extracellular milieu, though simultaneously elevated intracellularly, e.g., PC aa C32:3, pointing to increased PC synthesis and/or reduced PC release. Furthermore, altered arachidonic acid (AA) metabolism was found: 15(S)-HETE (15-hydroxy-eicosatetraenoic acid; 0 vs. 120 pM; p = 0.0014), AA (1.5-fold; p = 0.0055) and docosahexaenoic acid (DHA, C22:6; 2-fold; p = 0.0033) were higher in MUHO. This emphasizes a direct contribution of adipocytes to local adipose tissue inflammation. Elevated DHA, as an inhibitor of prostaglandin synthesis, might be a hint at counter-regulatory mechanisms in MUHO. CONCLUSION/INTERPRETATION We identified adipocyte-inherent metabolic alterations discriminating between MHO and MUHO.
Does acute passive stretching increase muscle length in children with cerebral palsy?
BACKGROUND Children with spastic cerebral palsy experience increased muscle stiffness and reduced muscle length, which may prevent elongation of the muscle during stretch. Stretching performed either by the clinician or by the children themselves is used as a treatment modality to increase/maintain joint range of motion. It is not clear whether the associated increases in muscle-tendon unit length are due to increases in muscle or tendon length. The purpose was to determine whether alterations in ankle range of motion in response to acute stretching were accompanied by increases in muscle length, and whether any effects would be dependent upon stretch technique. METHODS Eight children (6-14 y) with cerebral palsy received a passive dorsiflexion stretch for 5 × 20 s to each leg, which was applied by a physiotherapist or the children themselves. Maximum dorsiflexion angle, medial gastrocnemius muscle and fascicle lengths, and Achilles tendon length were calculated at a reference angle of 10° plantarflexion, and at maximum dorsiflexion in the pre- and post-stretch trials. FINDINGS All variables were significantly greater during pre- and post-stretch trials compared to the resting angle, and were independent of stretch technique. There was an approximate 10° increase in maximum dorsiflexion post-stretch, and this was accounted for by elongation of both muscle (0.8 cm) and tendon (1.0 cm). Muscle fascicle length increased significantly (0.6 cm) from pre- to post-stretch. INTERPRETATION The results provide evidence that commonly used stretching techniques can increase overall muscle and fascicle lengths immediately post-stretch in children with cerebral palsy.
Big Data Meet Green Challenges: Greening Big Data
Nowadays, there are two significant tendencies: how to process the enormous amount of data (big data), and how to deal with the green issues related to sustainability and environmental concerns. An interesting question is whether there are inherent correlations between the two tendencies in general. To answer this question, this paper first makes a comprehensive literature survey on how to green big data systems in terms of the whole life cycle of big data processing, and then studies the relevance between big data and green metrics, proposing two new metrics, effective energy efficiency and effective resource efficiency, in order to bring new views and potentials of green metrics for the era of big data.
Risk communication and informed consent in the medical tourism industry: A thematic content analysis of canadian broker websites
BACKGROUND Medical tourism, thought of as patients seeking non-emergency medical care outside of their home countries, is a growing industry worldwide. Canadians are amongst those engaging in medical tourism, and many are helped in the process of accessing care abroad by medical tourism brokers - agents who specialize in making international medical care arrangements for patients. As a key source of information for these patients, brokers are likely to play an important role in communicating the risks and benefits of undergoing surgery or other procedures abroad to their clientele. This raises important ethical concerns regarding processes such as informed consent and the liability of brokers in the event that complications arise from procedures. The purpose of this article is to examine the language, information, and online marketing of Canadian medical tourism brokers' websites in light of such ethical concerns. METHODS An exhaustive online search using multiple search engines and keywords was performed to compile a comprehensive directory of English-language Canadian medical tourism brokerage websites. These websites were examined using thematic content analysis, which included identifying informational themes, generating frequency counts of these themes, and comparing trends in these counts to the established literature. RESULTS Seventeen websites were identified for inclusion in this study. It was found that Canadian medical tourism broker websites varied widely in scope, content, professionalism and depth of information. Three themes emerged from the thematic content analysis: training and accreditation, risk communication, and business dimensions. Third party accreditation bodies of debatable regulatory value were regularly mentioned on the reviewed websites, and discussion of surgical risk was absent on 47% of the websites reviewed, with limited discussion of risk on the remaining ones. 
Terminology describing brokers' roles was somewhat inconsistent across the websites. Finally, brokers' roles in follow up care, their prices, and the speed of surgery were the most commonly included business dimensions on the reviewed websites. CONCLUSION Canadian medical tourism brokers currently lack a common standard of care and accreditation, and are widely lacking in providing adequate risk communication for potential medical tourists. This has implications for the informed consent and consequent safety of Canadian medical tourists.
Soft compression and the origins of nonlinear behavior of GaN HEMTs
This work responds to a question commonly posed in PA design: "Why does GaN HEMTs' nonlinear behavior seem so distinct from that of their Si LDMOS, GaAs MESFET or HEMT counterparts?" Starting from some recent results on the origin of AM/AM and AM/PM distortion in power amplifiers built with these device types, we demonstrate that there is no fundamental reason why the devices should differ, except that GaN HEMTs suffer from a noticeable low-frequency dispersion. We then dig into this particular aspect of GaN HEMT operation to show that this charge-trapping-related phenomenon is responsible for a severe self-biasing capable of inducing soft compression of an otherwise almost flat AM/AM gain plot - when measured with static CW tests - which, in fact, corresponds to a severe gain expansion when the AM/AM is assessed with more realistic dynamic tests performed with real communication signals. It is this class-C-PA-like AM/AM that is responsible for the recognized GaN HEMT nonlinearity.
Experimental evaluation of the schunk 5-Finger gripping hand for grasping tasks
In order to perform useful tasks, a service robot needs to manipulate objects in its environment. In this paper, we propose a method for experimental evaluation of the suitability of a robotic hand for grasping tasks in service robotics. The method is applied to the Schunk 5-Finger Gripping Hand, which is a mechatronic gripper designed for service robots. During evaluation, it is shown that the hand is able to grasp various common household objects and execute the grasps from the well-known Cutkosky grasp taxonomy [1]. The result is that it is a suitable hand for service robot tasks.
A phase I trial of intravenous infusion of ONYX-015 and enbrel in solid tumor patients
ONYX-015 is an attenuated chimeric human group C adenovirus, which preferentially replicates in and lyses tumor cells that are p53 negative. The purpose of this phase I, dose-escalation study was to determine the safety and feasibility of intravenous infusion with ONYX-015 in combination with enbrel in patients with advanced carcinoma. Enbrel is a recombinant dimer of human tumor-necrosis factor (TNF)-α receptor, previously shown to reduce the level of functional TNF. Nine patients, three in each cohort, received multiple cycles of ONYX-015 infusion (1 × 10^10, 1 × 10^11 and 1 × 10^12 vp weekly for 4 weeks/cycle) in addition to subcutaneous enbrel injections (only during cycle 1) per FDA-indicated dosing. Of the nine patients, four had stable disease. No significant adverse events were attributed to the experimental regimen, confirming that enbrel can be safely administered along with oncolytic virotherapy. Two of the three patients in cohort 3 had detectable viral DNA at days 3 and 8 post-ONYX-015 infusion. Their detectable circulating viral DNA was markedly higher during cycle 1 (with enbrel coadministration) as compared with cycle 2 (without enbrel) at the same time points. Area under the curve determinations indicate a markedly higher level of TNF-α induction and accelerated clearance at cycle 2 in the absence of enbrel. Further assessment is recommended.
Estimating body shape of dressed humans
The paper presents a method to estimate the detailed 3D body shape of a person even if heavy or loose clothing is worn. The approach is based on a space of human shapes, learned from a large database of registered body scans. Together with this database we use as input a 3D scan or model of the person wearing clothes and apply a fitting method, based on ICP (iterated closest point) registration and Laplacian mesh deformation. The statistical model of human body shapes enforces that the model stays within the space of human shapes. The method therefore allows us to compute the most likely shape and pose of the subject, even if it is heavily occluded or body parts are not visible. Several experiments demonstrate the applicability and accuracy of our approach to recover occluded or missing body parts from 3D laser scans.
Parallel Computation of Skyline and Reverse Skyline Queries Using MapReduce
The skyline operator and its variants such as dynamic skyline and reverse skyline operators have attracted considerable attention recently due to their broad applications. However, computations of such operators are challenging today since there is an increasing trend of applications to deal with big data. For such data-intensive applications, the MapReduce framework has been widely used recently. In this paper, we propose efficient parallel algorithms for processing the skyline and its variants using MapReduce. We first build histograms to effectively prune out non-skyline (non-reverse skyline) points in advance. We next partition data based on the regions divided by the histograms and compute candidate (reverse) skyline points for each region independently using MapReduce. Finally, we check whether each candidate point is actually a (reverse) skyline point in every region independently. Our performance study confirms the effectiveness and scalability of the proposed algorithms.
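The skyline operator the abstract above parallelizes can be illustrated with a minimal sequential sketch. This is not the paper's MapReduce algorithm - just the dominance test and a naive nested-loop skyline, assuming "smaller is better" in every dimension.

```python
def dominates(p, q):
    """p dominates q if p is <= q in every dimension and < in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Return the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

In the MapReduce setting, a histogram-based partitioning lets each region run such a candidate computation independently before a final cross-region dominance check.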
Radon concentrations in hot spring waters in northern Venezuela.
Concentrations of 222Rn were determined in selected thermal water samples of the northern region of Venezuela. Concentrations in the range of 1-560 Bq/l were found. Soil radon concentrations and air radon concentrations related to the high values of radon concentration in water were investigated in El Castaño and at a spa in Las Trincheras. An outstandingly high radon efflux was found in Las Trincheras with an average soil radon concentration of 122 kBq/m3, and an air radon concentration of 54 kBq/m3 in inhalation treatment pipes. Dose calculations revealed that regular consumption of the measured water samples presents an extra dose of radiation that may range up to 4 mSv/y. © 1999 Elsevier Science Ltd. All rights reserved.
Fitness functions for searching the Mandelbrot set
The Mandelbrot set is a famous fractal. It serves as the source of a large number of complex mathematical images. Evolutionary computation can be used to search the Mandelbrot set for interesting views. This study compares the results of using several different fitness functions for this search. Some of the fitness functions give substantial control over the appearance of the resulting views, while others simply locate parts of the Mandelbrot set in which there are complicated structures. All of the fitness functions are based on finding desirable patterns in the number of iterations required for the basic Mandelbrot formula to diverge on a set of points arranged in a regular grid near the boundary of the set. It is shown that using different fitness functions causes an evolutionary algorithm to locate different types of views into the Mandelbrot set.
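The raw material these fitness functions operate on - escape-time iteration counts over a regular grid - can be sketched as follows. The grid size, iteration cap, and the variance-based fitness below are illustrative assumptions, not the paper's exact functions.

```python
def escape_iterations(c, max_iter=256):
    """Iterations of z -> z^2 + c before |z| exceeds 2 (divergence test)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter  # treated as "did not diverge" within the cap

def grid_iteration_counts(center, width, n=16, max_iter=256):
    """Iteration counts over an n x n grid of sample points around `center`."""
    step = width / (n - 1)
    x0 = center.real - width / 2
    y0 = center.imag - width / 2
    return [[escape_iterations(complex(x0 + i * step, y0 + j * step), max_iter)
             for i in range(n)] for j in range(n)]

def variance_fitness(counts):
    """One plausible fitness: variance of the counts, rewarding views whose
    grid straddles complicated boundary structure."""
    flat = [c for row in counts for c in row]
    mean = sum(flat) / len(flat)
    return sum((c - mean) ** 2 for c in flat) / len(flat)
```

A view entirely inside or outside the set yields near-uniform counts and low variance, so an evolutionary algorithm maximizing this fitness is pushed toward the boundary.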
Direct Marketing, Indirect Profits: A Strategic Analysis of Dual-Channel Supply-Chain Design
Wei-yu Kevin Chiang • Dilip Chhajed • James D. Hess. Department of Information Systems, University of Maryland at Baltimore County, Baltimore, Maryland 21250; Department of Business Administration, University of Illinois at Urbana-Champaign, Champaign, Illinois 61820. [email protected] • [email protected] • [email protected]
Emergency Department Divert Avoidance Using Petri Nets
Emergency department overcrowding has become a major issue over the past decade. This research investigates emergency department overcrowding and the resulting ambulance diversion. Patient flow models of the emergency department were developed using Petri Nets. The objective of these models is two-fold: to analyze patient flow in the emergency department, and to assist in determining the minimum sequence of events leading a hospital from a desirable operating state to divert states. A two-phase mixed-integer programming formulation was used to generate the set of minimum sequences of events. This is ongoing research, and the results of this paper will be used to develop control policies for hospital divert avoidance.
Living Labs: A Bibliometric Analysis
The objective of this study is to understand how Living Lab(s) (LL) as a concept and research approach has developed, proliferated, and influenced scholarly research to date. The goal is to assist both the LL and Action Design Research (ADR) communities in advancing their fields by establishing understanding, commonalities, and challenges in advancing both research agendas. We adopt a bibliometric methodology to understand the scholarly impact, contribution, and intellectual structure of LL as a new approach to innovation. We conclude with recommendations on advancing both the ADR and LL fields of research, highlighting that increased cross-collaboration going forward offers clear opportunities to both fields.
Pfinder: Real-Time Tracking of the Human Body
Pfinder is a real-time system for tracking people and interpreting their behavior. It runs at 10Hz on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to obtain a 2D representation of head and hands in a wide range of viewing conditions. Pfinder has been successfully used in a wide range of applications including wireless interfaces, video databases, and low-bandwidth coding. Index Terms —Blobs, blob tracking, real-time, person tracking, 3D person tracking, segmentation, gesture recognition, mixture model, MDL.
From machu_picchu to "rafting the urubamba river": anticipating information needs via the entity-query graph
We study the problem of anticipating user search needs, based on their browsing activity. Given the current web page p that a user is visiting we want to recommend a small and diverse set of search queries that are relevant to the content of p, but also non-obvious and serendipitous. We introduce a novel method that is based on the content of the page visited, rather than on past browsing patterns as in previous literature. Our content-based approach can be used even for previously unseen pages. We represent the topics of a page by the set of Wikipedia entities extracted from it. To obtain useful query suggestions for these entities, we exploit a novel graph model that we call EQGraph (Entity-Query Graph), containing entities, queries, and transitions between entities, between queries, as well as from entities to queries. We perform Personalized PageRank computation on such a graph to expand the set of entities extracted from a page into a richer set of entities, and to associate these entities with relevant query suggestions. We develop an efficient implementation to deal with large graph instances and suggest queries from a large and diverse pool. We perform a user study that shows that our method produces relevant and interesting recommendations, and outperforms an alternative method based on reverse IR.
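The Personalized PageRank expansion step described above can be sketched on a toy entity-query graph. The graph structure, damping factor, and iteration count here are illustrative assumptions, not the paper's EQGraph construction.

```python
def personalized_pagerank(graph, seeds, alpha=0.15, iters=50):
    """Power iteration for Personalized PageRank.
    graph: node -> list of out-neighbors; seeds: restart distribution (sums to 1).
    With probability alpha the walk restarts at the seed nodes, so mass
    concentrates around the entities extracted from the page."""
    nodes = list(graph)
    rank = {n: seeds.get(n, 0.0) for n in nodes}
    for _ in range(iters):
        nxt = {n: alpha * seeds.get(n, 0.0) for n in nodes}
        for n in nodes:
            out = graph[n]
            if out:
                share = (1 - alpha) * rank[n] / len(out)
                for m in out:
                    nxt[m] += share
            else:  # dangling node: redistribute its mass to the seeds
                for m in nodes:
                    nxt[m] += (1 - alpha) * rank[n] * seeds.get(m, 0.0)
        rank = nxt
    return rank
```

Ranking queries and entities by their Personalized PageRank score relative to the page's seed entities is what yields the expanded, serendipitous suggestion pool.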
The controlled single particle: A new concept in odd-mass nuclei
Abstract In this paper, we discuss a new concept in studying odd-mass nuclei called "the controlled single particle". After introducing this new concept, we use it to reproduce some experimental data for 155Tb and compare the results. We show that using this concept, we can express several different excited bands, in addition to the ground band, in odd-mass nuclei, whether or not the parity is changed.
The small world of Shakespeare's plays.
Drama, at least according to the Aristotelian view, is effective inasmuch as it successfully mirrors real aspects of human behavior. This leads to the hypothesis that successful dramas will portray fictional social networks that have the same properties as those typical of human beings across ages and cultures. We outline a methodology for investigating this hypothesis and use it to examine ten of Shakespeare's plays. The cliques and groups portrayed in the plays correspond closely to those which have been observed in spontaneous human interaction, including in hunter-gatherer societies, and the networks of the plays exhibit "small world" properties of the type which have been observed in many human-made and natural systems.
Impact of Culture on Human Resource Management Practices : A 10-Country Comparison
The Model of Culture Fit explains how the socio-cultural environment influences internal work culture and human resource management practices. This model was tested on 2003 employees of private-sector companies in 10 countries. Participants completed a 57-item questionnaire designed to measure managerial perceptions of 4 socio-cultural dimensions, 6 dimensions of internal work culture, and HRM (human resource management) practices in 3 areas. A weighted multiple-regression analysis at the individual level showed that managers who characterized their socio-cultural environment as fatalistic also assumed that employees were not malleable by nature. These managers did not practice job enrichment and relied fully on control and performance-contingent pay. Managers who valued strong loyalty of the APPLIED PSYCHOLOGY: AN INTERNATIONAL REVIEW, 2000, 49 (1), 192-221
Applying U.S. Employment Discrimination Laws to International Employers
The question of whether U.S. employment discrimination laws apply to international employers is complex and involves multiple sources of legal authority including U.S. statutes, international treaties, and the laws of non-American host countries. This article provides detailed and simplifying guidance to assist employers in working through that complexity. Based on an examination of 98 federal courts cases, this article identifies and explains 8 general guidelines for determining when U.S. laws apply to international employers (e.g., U.S. employees working abroad or “foreign” employees working in the United States). These guidelines are incorporated into an organizing framework or “decision tree” that leads employers through the various decisions that must be made to determine whether U.S. discrimination laws apply in a wide range of international employment situations. Guidance for industrial and organizational (I-O) psychologists who advise international employers is provided and summarized in terms of general recommendations and conclusions.
Hip abductor control in walking following stroke -- the immediate effect of canes, taping and TheraTogs on gait.
OBJECTIVE To confirm previous findings that hip abductor activity measured by electromyography (EMG) on the side contralateral to cane use is reduced during walking in stroke patients. To assess whether an orthosis (TheraTogs) or hip abductor taping increase hemiplegic hip abductor activity compared with activity during cane walking or while walking without aids. To investigate the effect of each condition on temporo-spatial gait parameters. DESIGN Randomized, within-participant experimental study. SETTING Gait laboratory. SUBJECTS Thirteen patients following first unilateral stroke. INTERVENTION Data collection over six gait cycles as subjects walked at self-selected speed during: baseline (without aids) and in randomized order with (1) hip abductor taping, (2) TheraTogs, (3) cane in non-hemiplegic hand. MAIN MEASURES Peak EMG of gluteus medius and tensor fascia lata and temporo-spatial gait parameters. RESULTS Cane use reduced EMG activity in gluteus medius from baseline by 21.86%. TheraTogs increased it by 16.47% (change cane use-TheraTogs P = 0.001, effect size = -0.5) and tape by 5.8% (change cane use-tape P = 0.001, effect size = -0.46). In tensor fascia lata cane use reduced EMG activity from baseline by 19.14%. TheraTogs also reduced EMG activity from baseline by 1.10% (change cane use-TheraTogs P = 0.009, effect size -0.37) and tape by 3% (not significant). Gait speed (m/s) at: baseline 0.44, cane use 0.45, tape 0.48, TheraTogs 0.49. CONCLUSION Hip abductor taping and TheraTogs increase hemiplegic hip abductor activity and gait speed during walking compared with baseline and cane use.
Inertial Motion Capture Costume Design Study
The paper describes a scalable, wearable multi-sensor system for motion capture based on inertial measurement units (IMUs). Such a unit is composed of an accelerometer, a gyroscope and a magnetometer. The final quality of the obtained motion arises from all the individual parts of the described system. The proposed system is a sequence of the following stages: sensor data acquisition, sensor orientation estimation, system calibration, pose estimation and data visualisation. The construction of the system's architecture with the dataflow programming paradigm makes it easy to add, remove and replace the data processing steps. The modular architecture of the system allows an effortless introduction of new sensor orientation estimation algorithms. The original contribution of the paper is the design study of the individual components used in the motion capture system. The two key steps of the system design are explored in this paper: the evaluation of sensors and of algorithms for orientation estimation. The three chosen algorithms have been implemented and investigated as part of the experiment. Because the selection of the sensor has a significant impact on the final result, the sensor evaluation process is also explained and tested. The experimental results confirmed that the choice of sensor and orientation estimation algorithm affects the quality of the final results.
An Areal Rainfall Estimator Using Differential Propagation Phase: Evaluation Using a C-Band Radar and a Dense Gauge Network in the Tropics
An areal rainfall estimator based on differential propagation phase is proposed and evaluated using the Bureau of Meteorology Research Centre (BMRC) C-POL radar and a dense gauge network located near Darwin, Northern Territory, Australia. Twelve storm events during the summer rainy season (December 1998-March 1999) are analyzed and radar-gauge comparisons are evaluated in terms of normalized error and normalized bias. The areal rainfall algorithm proposed herein results in normalized error of 14% and normalized bias of 5.6% for storm total accumulation over an area of around 100 km2. Both radar measurement error and gauge sampling error are minimized substantially in the areal accumulation comparisons. The high accuracy of the radar-based method appears to validate the physical assumptions about the rain model used in the algorithm, primarily a gamma form of the drop size distribution model, an axis ratio model that accounts for transverse oscillations for D ≤ 4 mm and equilibrium shapes for D > 4 mm, and a Gaussian canting angle distribution model with zero mean and standard deviation 10°. These assumptions appear to be valid for tropical rainfall.
Differential investment and costs during avian incubation determined by individual quality: an experimental study of the common eider (Somateria mollissima).
Individuals of different quality may have different investment strategies, shaping responses to experimental manipulations and thereby making such patterns difficult to detect. However, previous clutch-size manipulation studies have rarely incorporated individual differences in quality. To examine the costs of incubation and reproductive investment in relation to changes in clutch size, we enlarged and reduced natural clutch sizes of four and five eggs by one egg early in the incubation period in female common eiders (Somateria mollissima), a sea duck with an anorectic incubation period. Females that had produced four eggs (lower quality) responded to clutch reductions by deserting the nest more frequently, but did not increase incubation effort in response to clutch enlargement, at the cost of reduced hatch success. Among birds with an original clutch size of five (higher quality), reducing and enlarging the clutch reduced and increased relative body mass loss, respectively, without affecting hatch success. In common eiders many females abandon their own ducklings to the care of other females. Enlarging five-egg clutches led to an increased brood care rate despite the higher effort spent incubating these clutches, indicating that the higher fitness value of a large brood increases adult brood investment. This study shows that the ability to respond to clutch-size manipulations depends on original clutch size, reflecting differences in female quality. Females of low quality were reluctant to increase investment at the cost of lower hatch success, whereas females of higher quality apparently have a larger capacity to increase both incubation effort and brood care investment.
Distractor Generation with Generative Adversarial Nets for Automatically Creating Fill-in-the-blank Questions
Distractor generation is a crucial step in fill-in-the-blank question generation. We propose a generative model, trained as a generative adversarial net (GAN), to create useful distractors. Our method uses only context information and does not use the correct answer, which is completely different from previous ontology-based or similarity-based approaches. Trained on the Wikipedia corpus, the proposed model is able to predict Wiki entities as distractors. Our method is evaluated on two biology question datasets collected from Wikipedia and actual college-level exams. Experimental results show that our context-based method achieves performance comparable to a frequently used word2vec-based method on the Wiki dataset. In addition, we propose a second-stage learner to combine the strengths of the two methods, which further improves the performance on both datasets, with 51.7% and 48.4% of generated distractors being acceptable.
Deep Edge Guided Recurrent Residual Learning for Image Super-Resolution
In this paper, we consider the image super-resolution (SR) problem. The main challenge of image SR is to recover the high-frequency details of a low-resolution (LR) image that are important for human perception. To address this essentially ill-posed problem, we introduce a Deep Edge Guided REcurrent rEsidual (DEGREE) network to progressively recover the high-frequency details. Different from most existing methods, which aim at predicting high-resolution (HR) images directly, DEGREE takes an alternative route: it recovers the difference between a pair of LR and HR images by recurrent residual learning. DEGREE further augments the SR process with edge-preserving capability: the LR image and its edge map jointly infer the sharp edge details of the HR image during the recurrent recovery process. To speed up training convergence, by-pass connections across the multiple layers of DEGREE are constructed. In addition, we offer an understanding of DEGREE from the viewpoint of sub-band frequency decomposition of the image signal and experimentally demonstrate how DEGREE can recover different frequency bands separately. Extensive experiments on three benchmark data sets clearly demonstrate the superiority of DEGREE over well-established baselines, and DEGREE also sets new state-of-the-art results on these data sets. We also present additional experiments on JPEG artifact reduction to demonstrate the generality and flexibility of the proposed DEGREE network in handling other image processing tasks.
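The residual-learning route described above can be illustrated on a toy 1D signal. The nearest-neighbour upsampling and list arithmetic below are illustrative stand-ins for the network's learned components, not the DEGREE architecture itself:

```python
def upsample_nearest(lr, factor=2):
    """Nearest-neighbour upsampling of a 1D signal (stand-in for bicubic)."""
    return [v for v in lr for _ in range(factor)]

def add_residual(base, residual):
    """Residual learning: the network predicts only HR - base, which is
    sparse and concentrated around edges, and it is added back to the base."""
    return [b + r for b, r in zip(base, residual)]

hr = [0, 0, 0, 10, 10, 10]    # ground-truth high-resolution signal (one sharp edge)
lr = [0, 5, 10]               # low-resolution observation
base = upsample_nearest(lr)   # coarse reconstruction: [0, 0, 5, 5, 10, 10]
residual = [h - b for h, b in zip(hr, base)]   # nonzero only near the edge
restored = add_residual(base, residual)
```

The residual here is zero everywhere except around the edge, which is exactly the "high-frequency detail" the abstract says is easier to predict than the full HR image.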
Side gate HiGT with low dv/dt noise and low loss
This paper presents a novel side gate HiGT (High-conductivity IGBT) that builds on the historical evolution of gate structures from planar to trench gate IGBTs. The side gate HiGT has a side-wall gate, and the opposite side of the channel region is covered by a thick oxide layer to reduce the Miller capacitance (Cres). In addition, the side gate HiGT has no floating p-layer, which would cause excess Vge overshoot. The proposed side gate HiGT has 75% smaller Cres than a conventional trench gate IGBT. The excess Vge overshoot during turn-on is effectively suppressed, and Eon + Err can be reduced by 34% at the same diode recovery dv/dt. Furthermore, the side gate HiGT has sufficiently rugged RBSOA and SCSOA.
JPE 11-4-3: Control and Analysis of Integrated Bidirectional DC/AC and DC/DC Converters for Plug-In Hybrid Electric Vehicle Applications
Plug-in hybrid electric vehicles (PHEVs) are specialized hybrid electric vehicles that have the potential to obtain enough energy for average daily commuting from batteries. The PHEV battery would be recharged from the power grid at home or at work, allowing a reduction in overall fuel consumption. This paper proposes an integrated power electronics interface for PHEVs, consisting of a novel Eight-Switch Inverter (ESI) and an interleaved DC/DC converter, in order to reduce the cost, mass and size of the power electronics unit (PEU) while maintaining high performance in every operating mode. In the proposed configuration, the ESI is able to function as a bidirectional single-phase AC/DC battery charger / vehicle-to-grid (V2G) interface and to transfer electrical energy between the DC link (connected to the battery) and the electric traction system as a DC/AC inverter. In addition, a bidirectional interleaved DC/DC converter with a dual-loop controller is proposed for interfacing the ESI to a low-voltage battery pack, in order to minimize the ripple of the battery current and to improve the efficiency of the DC system with a smaller inductor. To validate the performance of the proposed configuration, indirect field-oriented control (IFOC) based on particle swarm optimization (PSO) is used to optimize the efficiency of the AC drive system in PHEVs. The maximum efficiency of the motor is obtained by evaluating the optimal rotor flux at any operating point, where PSO is applied to evaluate the optimal flux. Moreover, an improved AC/DC controller based on Proportional-Resonant Control (PRC) is proposed in order to reduce the THD of the input current in charger/V2G modes. The proposed configuration is analyzed and its performance is validated using simulation results obtained in MATLAB/SIMULINK. Furthermore, it is experimentally validated with results obtained from prototypes developed and built in the laboratory based on a TMS320F2808 DSP.
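The ripple reduction that motivates the interleaved DC/DC stage can be seen with a toy model: summing two triangular phase currents shifted by half a switching period. The 50% duty cycle and unit amplitude are simplifying assumptions for illustration, not values from the paper:

```python
def triangle(t, period=1.0):
    """Unit-amplitude triangular ripple, like an inductor current at 50% duty."""
    ph = (t / period) % 1.0
    return 2.0 * ph if ph < 0.5 else 2.0 * (1.0 - ph)

def ripple(phase_offsets, samples=1000):
    """Peak-to-peak ripple of the summed current of interleaved phases."""
    vals = [sum(triangle(t / samples + off) for off in phase_offsets)
            for t in range(samples)]
    return max(vals) - min(vals)

single = ripple([0.0])            # one phase: full ripple
interleaved = ripple([0.0, 0.5])  # two phases shifted 180 degrees
```

At exactly 50% duty the two triangles cancel completely; at other duty ratios the cancellation is partial, but the summed ripple is still smaller than a single phase's, which is why interleaving permits a smaller inductor for the same battery-current ripple.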
A study of free air ball formation in palladium-coated copper and bare copper bonding wire
Due to the continuous rise in gold prices, copper (Cu) wire has become an alternative to gold (Au) wire in the field of LSI interconnection. The characteristics of the Free Air Ball (FAB), which is used for ball bonding, have a large influence on bond reliability. Understanding the mechanism of FAB formation is an important step in furthering the application of Cu wire, including next-generation high-density packaging and automotive IC applications. In this study, the FAB formation process was investigated in detail using a high-speed camera for visualization. The studies verified that the electronic flame-off (EFO) conditions have significant effects on the plasma characteristics, the initiation of wire tip melting, and the rate of ball rising. Further study was performed on the formation of irregular FABs, such as off-centered and pointed FABs, which have been a matter of concern in ball bonding; the camera analyses showed that ball tilting occurs from the initial stage of wire tip melting, while the transition into a pointed ball takes place during solidification. The FAB formation processes elucidated from these experiments are useful for achieving stable FAB formation.
Bead: Explorations in Information Visualization
We describe work on the visualization of bibliographic data and, to aid in this task, the application of numerical techniques for multidimensional scaling. Many areas of scientific research involve complex multivariate data. One example of this is Information Retrieval: document comparisons may involve a large number of variables. Such conditions do not favour the more well-known methods of visualization and graphical analysis, as it is rarely feasible to map each variable onto one aspect of even a three-dimensional, coloured and textured space. Bead is a prototype system for the graphically-based exploration of information. In this system, articles in a bibliography are represented by particles in 3-space. By using physically-based modelling techniques and taking advantage of fast methods for approximating potential fields, we represent the relationships between articles by their relative spatial positions. Inter-particle forces tend to make similar articles move closer to one another and dissimilar ones move apart. The result is a 3D scene that can be used to visualize patterns in the high-dimensional information space.
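The force-directed placement idea can be sketched without the paper's fast potential-field approximations (which are what make Bead scale). The O(n²) spring relaxation below, in 2D rather than 3-space and with an invented similarity matrix, is only a toy stand-in:

```python
import math
import random

def layout(similarity, steps=500, lr=0.05):
    """Place items so pairwise distance relaxes toward 1 - similarity:
    similar articles end up close, dissimilar ones far apart."""
    n = len(similarity)
    random.seed(0)
    pos = [[random.random(), random.random()] for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                d = math.hypot(dx, dy) or 1e-9
                target = 1.0 - similarity[i][j]
                f = lr * (d - target) / d   # spring toward the target distance
                pos[i][0] += f * dx
                pos[i][1] += f * dy
    return pos

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

sim = [[1.0, 0.9, 0.1],
       [0.9, 1.0, 0.1],
       [0.1, 0.1, 1.0]]   # articles 0 and 1 similar, article 2 an outlier
pos = layout(sim)
d01, d02 = dist(pos[0], pos[1]), dist(pos[0], pos[2])
```

After relaxation the two similar articles sit close together while the outlier is pushed away, which is the spatial pattern Bead exposes to the user.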
25-Gbps×4 optical transmitter with adjustable asymmetric pre-emphasis in 65-nm CMOS
This paper describes the design and experimental results of a 25 Gbps, four-channel optical transmitter consisting of a vertical-cavity surface-emitting laser (VCSEL) driver with an asymmetric pre-emphasis circuit and an electrical receiver. To make data transfer faster in directly modulated VCSEL-based optical communications, the driver circuit requires an asymmetric pre-emphasis signal to compensate for the nonlinear characteristics of the VCSEL. The asymmetric pre-emphasis signal is created by adjusting the duty ratio with a delay circuit. A test chip was fabricated in a standard 65-nm CMOS process and demonstrated. Experimental evaluation showed that the transmitter enlarged the eye opening of a 25 Gbps, PRBS 2^9-1 test signal by 8.8% and achieved a fully optical four-channel link with an optical receiver at a power efficiency of 10.3 mW/Gbps/ch at 25 Gbps.
Neural networks designing neural networks: Multi-objective hyper-parameter optimization
Artificial neural networks have gone through a recent rise in popularity, achieving state-of-the-art results in various fields, including image classification, speech recognition, and automated control. Both the performance and computational complexity of such models are heavily dependent on the design of characteristic hyper-parameters (e.g., number of hidden layers, nodes per layer, or choice of activation functions), which have traditionally been optimized manually. With machine learning penetrating low-power mobile and embedded areas, the need to optimize not only for performance (accuracy) but also for implementation complexity becomes paramount. In this work, we present a multi-objective design space exploration method that reduces the number of solution networks trained and evaluated through response surface modelling. Given spaces which can easily exceed 10^20 solutions, manually designing a near-optimal architecture is unlikely, as opportunities to reduce network complexity while maintaining performance may be overlooked. This problem is exacerbated by the fact that hyper-parameters which perform well on specific datasets may yield sub-par results on others, and must therefore be designed on a per-application basis. In our work, machine learning is leveraged by training an artificial neural network to predict the performance of future candidate networks. The method is evaluated on the MNIST and CIFAR-10 image datasets, optimizing for both recognition accuracy and computational complexity. Experimental results demonstrate that the proposed method can closely approximate the Pareto-optimal front while exploring only a small fraction of the design space.
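A minimal sketch of surrogate-guided exploration: a cheap response-surface model screens the whole space, and only its most promising proposal is actually trained. The toy accuracy landscape and the nearest-neighbour surrogate standing in for the authors' neural-network predictor are both invented:

```python
import random

def train_and_eval(hp):
    """Stand-in for an expensive training run (would train a real network).
    Hypothetical landscape with peak accuracy at depth 4, width 64."""
    depth, width = hp
    return 1.0 - 0.02 * abs(depth - 4) - 0.002 * abs(width - 64)

def surrogate(history, hp):
    """Tiny response-surface model: predict a candidate's accuracy from the
    nearest already-evaluated configuration."""
    (d, w), acc = min(history,
                      key=lambda h: abs(h[0][0] - hp[0]) + 0.01 * abs(h[0][1] - hp[1]))
    return acc

random.seed(1)
space = [(d, w) for d in range(1, 9) for w in (16, 32, 64, 128)]   # 32 designs
history = [(hp, train_and_eval(hp)) for hp in random.sample(space, 5)]  # seed runs

for _ in range(10):
    # Screening the full space via the surrogate is cheap; only the chosen
    # candidate (plus a little exploration noise) is actually trained.
    cand = max(space, key=lambda hp: surrogate(history, hp) + random.uniform(0, 0.02))
    history.append((cand, train_and_eval(cand)))

best_hp, best_acc = max(history, key=lambda h: h[1])
```

Only 15 of 32 designs are ever trained, mirroring the abstract's claim of exploring a small fraction of the space; a real system would use two objectives (accuracy and complexity) and keep a Pareto front instead of a single best.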
Diversity driven Attention Model for Query-based Abstractive Summarization
Abstractive summarization aims to generate a shorter version of a document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encode-attend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc., but it suffers from the drawback of generating repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions: (i) a query attention model (in addition to the document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query), and (ii) a new diversity-based attention model which aims to alleviate the problem of repeating phrases in the summary. To enable the testing of this model we introduce a new query-based summarization dataset built from Debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28% (absolute) in ROUGE-L scores.
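The diversity-attention idea, keeping successive context vectors from repeating what was already used, can be sketched with a simple orthogonalization step. This is an illustrative simplification, not the paper's exact formulation:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def diversify(context, history):
    """Remove from the current attention context the component lying along
    the accumulated history, discouraging repeated phrases."""
    if history is None:
        return context
    scale = dot(context, history) / (dot(history, history) or 1e-9)
    return [c - scale * h for c, h in zip(context, history)]

c1 = [1.0, 0.0]          # context vector at decoding step 1
c2 = [1.0, 0.1]          # nearly identical context at step 2: repetition risk
d2 = diversify(c2, c1)   # only the novel component survives
```

After the subtraction the diversified context is orthogonal to the previous one, so the decoder is fed only the part of the context it has not already attended to.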
Soybean Plant Disease Identification Using Convolutional Neural Network
Plants have become an important source of energy and are a fundamental piece in the puzzle to solve the problem of global warming. However, plant diseases are threatening the livelihood of this important source. Convolutional neural networks (CNNs) have demonstrated great performance (beating that of humans) in object recognition and image classification problems. This paper describes the feasibility of CNNs for plant disease classification on leaf images taken in the natural environment. The model is designed based on the LeNet architecture to perform soybean plant disease classification. 12,673 samples containing leaf images of four classes, including healthy leaf images, were obtained from the PlantVillage database. The images were taken in an uncontrolled environment. The implemented model achieves 99.32% classification accuracy, which shows clearly that CNNs can extract important features and classify plant diseases from images taken in the natural environment.
SCADA security: a review and enhancement for DNP3 based systems
Supervisory control and data acquisition (SCADA) systems are large-scale industrial control systems, often spread across geographically dispersed locations, that let human operators control entire physical systems from a single control room. Early multi-site SCADA systems used closed networks and proprietary industrial communication protocols such as Modbus and DNP3 to reach remote sites. Over time, however, it has become more convenient and more cost-effective to connect them to the Internet. Internet connections to SCADA systems introduce new vulnerabilities, as SCADA systems were not designed with Internet security in mind. This can become a matter of national security when these systems control power plants, water treatment facilities, or other pieces of critical infrastructure. Compared to IT systems, SCADA systems have higher requirements concerning reliability, latency and uptime, so it is not always feasible to apply the security measures deployed in IT systems. This paper provides an overview of security issues and threats in SCADA networks. Next, attention is focused on security assessment of SCADA systems. This is followed by an overview of relevant SCADA security solutions. Finally, we discuss our proposed security solution, which is deployed as a bump-in-the-wire.
Quantitative evaluation of LDDMM, FreeSurfer, and CARET for cortical surface mapping
Cortical surface mapping has been widely used to compensate for individual variability of cortical shape and topology in anatomical and functional studies. While many surface mapping methods were proposed based on landmarks, curves, spherical or native cortical coordinates, few studies have extensively and quantitatively evaluated surface mapping methods across different methodologies. In this study we compared five cortical surface mapping algorithms, including large deformation diffeomorphic metric mapping (LDDMM) for curves (LDDMM-curve), for surfaces (LDDMM-surface), multi-manifold LDDMM (MM-LDDMM), FreeSurfer, and CARET, using 40 MRI scans and 10 simulated datasets. We computed curve variation errors and surface alignment consistency for assessing the mapping accuracy of local cortical features (e.g., gyral/sulcal curves and sulcal regions) and the curvature correlation for measuring the mapping accuracy in terms of overall cortical shape. In addition, the simulated datasets facilitated the investigation of mapping error distribution over the cortical surface when the MM-LDDMM, FreeSurfer, and CARET mapping algorithms were applied. Our results revealed that the LDDMM-curve, MM-LDDMM, and CARET approaches best aligned the local curve features with their own curves. The MM-LDDMM approach was also found to be the best in aligning the local regions and cortical folding patterns (e.g., curvature) as compared to the other mapping approaches. The simulation experiment showed that the MM-LDDMM mapping yielded less local and global deformation errors than the CARET and FreeSurfer mappings.
Calculation of transient puffer pressure rise taking mechanical compression, nozzle ablation, and arc energy into consideration
The thermal puffer-type gas circuit breaker (GCB) has high dielectric and current interruption capability. In order to design a good thermal puffer GCB, it is important to know the blast pressure available for arc cooling. Although pressure calculation programs have been developed and used for design work, the basic characteristics, such as the contribution of nozzle ablation gas to puffer pressure rise, the amount of back-flow gas to the puffer chamber, and the pressure distribution along gas passages during current interruption, are not well known. In this paper, pressure rise, mass flow, and temperature calculations were carried out using a new calculation model, which takes mechanical compression by the puffer piston, nozzle ablation in the nozzle throat, and arc energy into consideration. By analyzing the calculation results, we found the pressure rise mechanism to be as follows. While the fixed contact is located in the divergent part of the nozzle, not all of the ablation gas generated from the nozzle wall can be exhausted through the nozzle, which leads to high pressure in the nozzle throat. This pressure drives hot ablation gas back to the puffer chamber via the gas passage, and the puffer pressure increases thermally due to the resulting temperature rise. At longer arcing times, as a high puffer pressure is already established in the puffer chamber, the nozzle ablation gas cannot flow back to it. In addition, as mass flow through the nozzle is limited by low gas density, the puffer pressure rise is obtained from the mechanical compression of the puffer piston.
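The mechanical-compression contribution alone can be sketched with the ideal adiabatic relation; the filling pressure, volumes, and heat-capacity ratio below are hypothetical, and the thermal terms from nozzle ablation and arc energy that the paper's model adds are omitted:

```python
def piston_pressure(p0, v0, v, gamma=1.1):
    """Adiabatic compression by the puffer piston alone:
    p * V**gamma stays constant (gamma is a placeholder ratio, not a datum
    for the actual fill gas)."""
    return p0 * (v0 / v) ** gamma

# Hypothetical case: 0.6 MPa filling pressure, piston sweeps the chamber
# volume from 1.0 L down to 0.7 L during the opening stroke.
p = piston_pressure(0.6e6, 1.0, 0.7)
```

This gives roughly 0.89 MPa, i.e. a purely mechanical rise of about 50%; in the paper's model the thermal back-flow of ablation gas adds to this at short arcing times, while at long arcing times the mechanical term dominates.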
ScreenerNet: Learning Self-Paced Curriculum for Deep Neural Networks
We propose to learn a curriculum, or syllabus, for deep reinforcement learning and supervised learning with deep neural networks by an attachable deep neural network, called ScreenerNet. Specifically, we learn a weight for each sample by jointly training the ScreenerNet and the main network in an end-to-end self-paced fashion. The ScreenerNet has neither sampling bias nor memory of the past learning history. We show that networks augmented with the ScreenerNet converge faster and with better accuracy than state-of-the-art curriculum learning methods in extensive experiments on a cart-pole task using deep Q-learning and on supervised visual recognition tasks using three vision datasets: Pascal VOC2012, CIFAR10, and MNIST. Moreover, the ScreenerNet can be combined with other curriculum learning methods, such as Prioritized Experience Replay (PER), for further accuracy improvement.
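The joint training described above couples two objectives: the main network's per-sample errors are scaled by the screener's weights, while the screener is trained to assign high weights to hard samples and low weights to easy ones. The margin form below is an assumption made for illustration, not a quote of the paper's exact loss:

```python
def weighted_main_loss(errors, weights):
    """Main-network objective: per-sample errors scaled by screener weights."""
    return sum(w * e for w, e in zip(weights, errors)) / len(errors)

def screener_loss(errors, weights, margin=1.0):
    """Self-paced screener objective (sketch): a weight is pushed up when the
    sample's error is high and down when it is low."""
    return sum((1.0 - w) ** 2 * e + w ** 2 * max(margin - e, 0.0)
               for e, w in zip(errors, weights)) / len(errors)

easy_low  = screener_loss([0.0], [0.0])   # easy sample, low weight
easy_high = screener_loss([0.0], [1.0])   # easy sample, high weight
hard_low  = screener_loss([1.0], [0.0])   # hard sample, low weight
hard_high = screener_loss([1.0], [1.0])   # hard sample, high weight
```

Minimizing this objective drives the weight of an easy sample toward 0 and of a hard sample toward 1, so the main network's effective curriculum shifts toward currently-hard examples without any replay memory.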
Introduction to the Special Issue: Commemorating Guilford's 1950 Presidential Address
This entire issue of the Creativity Research Journal (CRJ) is devoted to the work of J. P. Guilford. In addition to the two introductions, this issue contains theoretical articles honoring Guilford, followed by empirical investigations that use his ideas or techniques. Some of the empirical articles were not prepared specifically for this issue; they nonetheless honor Guilford by showing his impact on the field of creative studies. Guilford’s (1950) presidential address to the American Psychological Association often is credited with initiating the empirical research on creativity. As Plucker (2000–2001) points out in the second introduction to this special issue, Guilford himself spread the credit out to others working in the field and to the sociocultural context. Indeed, many important empirical efforts preceded Guilford’s. One historical review of creative studies was presented by Albert and Runco (1999). Becker (1995) described the 19th-century antecedents to creative studies. Most recently, Runco (1999) presented a chronology of events and publications that have somehow each been significant for creativity research. Even with numerous earlier efforts, Guilford (1950) himself did a great deal for this field. He argued cogently that we should study creativity, for example, and gave us techniques that could be used to do so. His work was so influential that, although he often is given a great deal of credit, at the same time he also is probably cited less than he should be. He is one of those persons whose work is a staple, and as such it is often taken for granted (and uncited). Consider in this regard his work on divergent thinking. He really pushed this line of work along—although, again, to be perfectly accurate there were earlier empirical studies of divergent thinking (reviewed by Runco, in press)—but Guilford sometimes is not cited in investigations that use measures adapted from his early work. 
Historiometricians realize that this sometimes happens: Citations are not always a good indicator of impact, because some people are so well known that they are sometimes mentioned but not cited. Guilford is of that stature in creative studies. The work on divergent thinking is represented in the second half of this issue. Be forewarned: A few articles commemorating Guilford will appear in the next issue. I also expect that Guilford’s ideas and work will be cited, discussed, applied, and honored for a long time to come, in CRJ and elsewhere.
Applying business intelligence innovations to emergency management.
The use of business intelligence (BI) is common among corporations in the private sector to improve business decision making and create insights for competitive advantage. Increasingly, emergency management agencies are using tools and processes similar to BI systems. With a more thorough understanding of the principles of BI and its supporting technologies, and a careful comparison to the business model of emergency management, this paper seeks to provide insights into how lessons from the private sector can contribute to the development of effective and efficient emergency management BI utilisation.
Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks
Selecting optimal parameters for a neural network architecture can often make the difference between mediocre and state-of-the-art performance. However, little is published about which parameters and design choices should be evaluated or selected, making correct hyperparameter optimization often a “black art that requires expert experiences” (Snoek et al., 2012). In this paper, we evaluate the importance of different network design choices and hyperparameters for five common linguistic sequence tagging tasks (POS, Chunking, NER, Entity Recognition, and Event Detection). We evaluated over 50,000 different setups and found that some parameters, like the pre-trained word embeddings or the last layer of the network, have a large impact on the performance, while other parameters, for example the number of LSTM layers or the number of recurrent units, are of minor importance. We give a recommendation on a configuration that performs well across different tasks. The optimized implementation of our BiLSTM-CRF architecture is publicly available. This publication explains the experimental setup in detail and discusses the results. A condensed version of this paper was presented at EMNLP 2017 (Reimers and Gurevych, 2017).
Gaussian Prototypical Networks for Few-Shot Learning on Omniglot
We propose a novel architecture for k-shot classification on the Omniglot dataset. Building on prototypical networks, we extend their architecture to what we call Gaussian prototypical networks. Prototypical networks learn a map between images and embedding vectors, and use their clustering for classification. In our model, part of the encoder output is interpreted as a confidence region estimate about the embedding point, expressed as a Gaussian covariance matrix. Our network then constructs a direction- and class-dependent distance metric on the embedding space, using the uncertainties of individual data points as weights. We show that Gaussian prototypical networks are a preferred architecture over vanilla prototypical networks with an equivalent number of parameters. We report state-of-the-art performance in 1-shot and 5-shot classification in both the 5-way and 20-way regimes (for 5-shot 5-way, we are comparable to the previous state of the art) on the Omniglot dataset. We explore artificially down-sampling a fraction of images in the training set, which improves our performance even further. We therefore hypothesize that Gaussian prototypical networks might perform better on less homogeneous, noisier datasets, which are commonplace in real-world applications.
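The class-dependent distance metric can be sketched with diagonal covariances; the two-class, two-dimensional numbers below are invented for illustration and stand in for what the encoder would actually produce:

```python
def gaussian_sq_distance(x, prototype, inv_var):
    """Class-dependent squared distance: embedding dimensions along which a
    class is diffuse (small inverse variance) contribute less."""
    return sum(iv * (a - p) ** 2 for a, p, iv in zip(x, prototype, inv_var))

def classify(x, prototypes, inv_vars):
    return min(prototypes,
               key=lambda c: gaussian_sq_distance(x, prototypes[c], inv_vars[c]))

prototypes = {"a": [0.0, 0.0], "b": [2.0, 0.0]}
inv_vars   = {"a": [1.0, 1.0], "b": [0.1, 1.0]}   # class "b" spreads along dim 0
query = [0.9, 0.0]                                 # Euclidean-closer to "a"
label = classify(query, prototypes, inv_vars)
```

The query lies nearer prototype "a" in plain Euclidean distance, but the uncertainty-weighted metric assigns it to the diffuse class "b"; this is exactly the behaviour a vanilla prototypical network, which uses a single isotropic metric, cannot express.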
Perception Markup Language: Towards a Standardized Representation of Perceived Nonverbal Behaviors
Modern virtual agents require knowledge about their environment, the interaction itself, and their interlocutors’ behavior in order to be able to show appropriate nonverbal behavior as well as to adapt dialog policies accordingly. Recent achievements in the area of automatic behavior recognition and understanding can provide information about the interactants’ multimodal nonverbal behavior and subsequently their affective states. In this paper, we introduce a perception markup language (PML) which is a first step towards a standardized representation of perceived nonverbal behaviors. PML follows several design concepts, namely compatibility and synergy, modeling uncertainty, multiple interpretative layers, and extensibility, in order to maximize its usefulness for the research community. We show how we can successfully integrate PML in a fully automated virtual agent system for healthcare applications.
Ayahuasca as Antidepressant? Psychedelics and Styles of Reasoning in
There is a growing interest among scientists and the lay public alike in using the South American psychedelic brew, ayahuasca, to treat psychiatric disorders like depression and anxiety. Such a practice is controversial due to a style of reasoning within conventional psychiatry that sees psychedelic-induced modified states of consciousness as pathological. This article analyzes the academic literature on ayahuasca’s psychological effects to determine how this style of reasoning is shaping formal scientific discourse on ayahuasca’s therapeutic potential as a treatment for depression and anxiety. Findings from these publications suggest that different kinds of experiments are differentially affected by this style of reasoning but can nonetheless indicate some potential therapeutic utility of the ayahuasca-induced modified state of consciousness. The article concludes by suggesting ways in which conventional psychiatry’s dominant style of reasoning about psychedelic modified states of consciousness could be reconsidered. Keywords: ayahuasca, psychedelic, hallucinogen, psychiatry, depression
The effect of site quality on repurchase intention in Internet shopping through mediating variables: The case of university students in South Korea
We performed a study to determine the influence that site quality has on repurchase intention in Internet shopping through customer satisfaction, customer trust, and customer commitment. Appropriate measures were developed and tested on 230 university students of Gyeongnam province in South Korea with a cross-sectional questionnaire survey. The results of the empirical analysis confirmed, first, that site quality can be conceptualized as a composite of six dimensions: shopping convenience, site design, information usefulness, transaction security, payment system, and customer communication. Second, site quality positively affected customer satisfaction and customer trust, but did not affect customer commitment and repurchase intention. Third, site quality can affect repurchase intention by enhancing or attenuating customer satisfaction, customer trust, and customer commitment in online transaction situations. The mediating effect of customer satisfaction, customer trust, and customer commitment between site quality and repurchase intention is identified. Fourth, site quality indirectly affected customer commitment through customer satisfaction. Customer satisfaction indirectly affected repurchase intention through customer trust and customer commitment. Thus, it is found that site quality can be a very important factor in enhancing repurchase intention from the customer perspective.
Vector graphics complexes
Basic topological modeling, such as the ability to have several faces share a common edge, has been largely absent from vector graphics. We introduce the vector graphics complex (VGC) as a simple data structure to support fundamental topological modeling operations for vector graphics illustrations. The VGC can represent any arbitrary non-manifold topology as an immersion in the plane, unlike planar maps which can only represent embeddings. This allows for the direct representation of incidence relationships between objects and can therefore more faithfully capture the intended semantics of many illustrations, while at the same time keeping the geometric flexibility of stacking-based systems. We describe and implement a set of topological editing operations for the VGC, including glue, unglue, cut, and uncut. Our system maintains a global stacking order for all faces, edges, and vertices without requiring that components of an object reside together on a single layer. This allows for the coordinated editing of shared vertices and edges even for objects that have components distributed across multiple layers. We introduce VGC-specific methods that are tailored towards quickly achieving desired stacking orders for faces, edges, and vertices.
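The shared-edge idea at the core of the VGC can be sketched with a minimal glue operation; the class layout below is a hypothetical simplification of the paper's data structure (which also supports unglue, cut, uncut, and a global stacking order):

```python
class Vertex:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Edge:
    def __init__(self, start, end):
        self.start, self.end = start, end

class Face:
    def __init__(self, edges):
        self.edges = edges   # boundary, stored as references to shared Edge objects

def glue(faces, keep, drop):
    """Topological glue: every face that referenced `drop` now shares `keep`,
    so a later edit to `keep` deforms all incident faces coherently."""
    for f in faces:
        f.edges = [keep if e is drop else e for e in f.edges]

v1, v2 = Vertex(0.0, 0.0), Vertex(0.0, 1.0)
e_left, e_right = Edge(v1, v2), Edge(v1, v2)   # coincident but distinct edges
left, right = Face([e_left]), Face([e_right])
glue([left, right], e_left, e_right)
shared = left.edges[0] is right.edges[0]        # one edge, two incident faces
```

Because both faces now hold a reference to the same Edge object rather than two coincident copies, moving that edge's vertices updates both faces at once, which is the incidence semantics the abstract contrasts with purely stacking-based systems.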
Hate Online : A Content Analysis of Extremist Internet Sites
Extremists, such as hate groups espousing racial supremacy or separation, have established an online presence. A content analysis of 157 extremist web sites selected through purposive sampling was conducted using two raters per site. The sample represented a variety of extremist groups and included both organized groups and sites maintained by apparently unaffiliated individuals. Among the findings were that the majority of sites contained external links to other extremist sites (including international sites), that roughly half the sites included multimedia content, and that half contained racist symbols. A third of the sites disavowed racism or hatred, yet one third contained material from supremacist literature. A small percentage of sites specifically urged violence. These and other findings suggest that the Internet may be an especially powerful tool for extremists as a means of reaching an international audience, recruiting members, linking diverse extremist groups, and allowing maximum image control.
Facial Expression Recognition Using Enhanced Deep 3D Convolutional Neural Networks
Deep Neural Networks (DNNs) have been shown to outperform traditional methods in various visual recognition tasks, including Facial Expression Recognition (FER). In spite of efforts made to improve the accuracy of FER systems using DNNs, existing methods are still not generalizable enough for practical applications. This paper proposes a 3D Convolutional Neural Network method for FER in videos. The network architecture consists of 3D Inception-ResNet layers followed by an LSTM unit, which together extract the spatial relations within facial images as well as the temporal relations between different frames in the video. Facial landmark points are also used as inputs to the network, emphasizing the importance of facial components over facial regions that may not contribute significantly to generating facial expressions. The proposed method is evaluated on four publicly available databases in subject-independent and cross-database tasks and outperforms state-of-the-art methods.
An Analysis of Buck Converter Efficiency in PWM/PFM Mode with Simulink
This paper studies the efficiency of PWM and PFM control modes in DC-DC buck converters. MATLAB Simulink models are built to facilitate the analysis of various effects on power loss and conversion efficiency, including different load conditions, gate switching frequency, and the setting of voltage and current thresholds. From the efficiency-versus-load curves, an optimal switching frequency is found that achieves good efficiency throughout the wide load range. The simulation results are then compared with theoretical predictions, justifying the effectiveness of computer-based simulation. Finally, efficiencies under the two control modes are compared to verify the improvement offered by the PFM scheme.
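The trade-off the abstract investigates can be illustrated with a first-order loss model. The component values below are invented for illustration and are not taken from the paper's Simulink models; the point is that the frequency-proportional losses (switching overlap and gate charge) dominate at light load, which is why scaling the switching frequency down, as PFM does, improves light-load efficiency.

```python
# First-order efficiency model of a buck converter (illustrative values only).

def buck_efficiency(v_in, v_out, i_load, f_sw,
                    r_on=0.05, t_sw=20e-9, q_g=10e-9, v_drive=5.0):
    p_out = v_out * i_load
    p_cond = i_load ** 2 * r_on                   # conduction loss in the switches
    p_switch = 0.5 * v_in * i_load * t_sw * f_sw  # V-I overlap loss, scales with f_sw
    p_gate = q_g * v_drive * f_sw                 # gate-charge loss, load-independent
    return p_out / (p_out + p_cond + p_switch + p_gate)

light_load = 0.01  # 10 mA
eff_pwm = buck_efficiency(12.0, 3.3, light_load, f_sw=500e3)  # fixed PWM frequency
eff_pfm = buck_efficiency(12.0, 3.3, light_load, f_sw=25e3)   # PFM: frequency drops with load
assert eff_pfm > eff_pwm  # lower frequency wins at light load
```

At heavy load the conduction term dominates instead, so the choice of switching frequency matters much less there, consistent with comparing the two modes across the load range.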
Are we ready for autonomous driving? The KITTI vision benchmark suite
Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.
Wetbrush: GPU-based 3D painting simulation at the bristle level
We present a real-time painting system that simulates the interactions among brush, paint, and canvas at the bristle level. The key challenge is how to model and simulate sub-pixel paint details, given the limited computational resource in each time step. To achieve this goal, we propose to define paint liquid in a hybrid fashion: the liquid close to the brush is modeled by particles, and the liquid away from the brush is modeled by a density field. Based on this representation, we develop a variety of techniques to ensure the performance and robustness of our simulator under large time steps, including brush and particle simulations in non-inertial frames, a fixed-point method for accelerating Jacobi iterations, and a new Eulerian-Lagrangian approach for simulating detailed liquid effects. The resulting system can realistically simulate not only the motions of brush bristles and paint liquid, but also the liquid transfer processes among different representations. We implement the whole system on the GPU using CUDA. Our experiments show that artists can use the system to draw realistic and vivid digital paintings by applying painting techniques that they are familiar with but that many existing systems do not offer.
Adhesion molecules in different treatments of acute myocardial infarction
BACKGROUND Tissue damage after ischemia and reperfusion involves leukocyte endothelial interactions mediated by cell adhesion molecules. This study was designed to determine the time course of soluble adhesion molecules in patients with acute myocardial infarction after attempted reperfusion by thrombolysis with tissue plasminogen activator (tPA) or streptokinase (SK), or percutaneous transluminal coronary angioplasty (PTCA). METHODS In 3 x 10 randomly selected patients with acute myocardial infarction undergoing thrombolysis with tPA or SK, or treated with PTCA, plasma concentrations of soluble L-selectin, P-selectin, E-selectin, intercellular adhesion molecule-1 (ICAM-1), vascular cell adhesion molecule-1 (VCAM-1) and platelet endothelial cell adhesion molecule-1 (PECAM-1) were measured by enzyme-linked immunosorbent assay, 30 min and 1, 2, 4, 8, 12 and 24 hours after intervention. RESULTS After thrombolysis with tPA, soluble L-selectin concentrations were persistently depressed and soluble PECAM-1 concentrations were elevated, compared with controls, SK and PTCA. While soluble VCAM-1 concentrations did not differ within the first hours after interventions between the three groups, soluble VCAM-1 rose by 24 hours after tPA thrombolysis but did not increase after SK and PTCA treatment. Soluble ICAM-1 concentrations were consistently elevated after PTCA compared with controls and thrombolysed patients. Soluble E-selectin was depressed after tPA thrombolysis and PTCA in comparison with controls, while the SK group showed an increase throughout the observation period. Soluble P-selectin was increased after PTCA and SK lysis up to 8 hours after treatment compared with controls, but no significant differences could be found between treatment groups. CONCLUSION Adhesion molecules mediating leukocyte endothelial interactions are altered subsequent to postischemic reperfusion and by treatment with thrombolytic agents and angioplasty. 
The clinical relevance of these biological changes remains to be determined.
Antibiotics in addition to systemic corticosteroids for acute exacerbations of chronic obstructive pulmonary disease.
RATIONALE The role of antibiotics in acute exacerbations is controversial, and their efficacy when added to systemic corticosteroids is unknown. OBJECTIVES We conducted a randomized, placebo-controlled trial to determine the effects of doxycycline in addition to corticosteroids on clinical outcome, microbiological outcome, lung function, and systemic inflammation in patients hospitalized with an acute exacerbation of chronic obstructive pulmonary disease. METHODS In 223 patients, we enrolled 265 exacerbations, defined on the basis of increased dyspnea and increased sputum volume with or without increased sputum purulence. Patients received 200 mg of oral doxycycline or matching placebo for 7 days in addition to systemic corticosteroids. Clinical and microbiological response, time to treatment failure, lung function, symptom scores, and serum C-reactive protein were assessed. MEASUREMENTS AND MAIN RESULTS On Day 30, clinical success was similar in intention-to-treat patients (odds ratio, 1.3; 95% confidence interval, 0.8 to 2.0) and per-protocol patients. Doxycycline showed superiority over placebo in terms of clinical success on Day 10 in intention-to-treat patients (odds ratio, 1.9; 95% confidence interval, 1.1 to 3.2), but not in per-protocol patients. Doxycycline was also superior in terms of clinical cure on Day 10, microbiological outcome, use of open-label antibiotics, and symptoms. There was no interaction between the treatment effect and any of the subgroup variables (lung function, type of exacerbation, serum C-reactive protein, and bacterial presence). CONCLUSIONS Although equivalent to placebo in terms of clinical success on Day 30, doxycycline showed superiority in terms of clinical success and clinical cure on Day 10, microbiological success, the use of open-label antibiotics, and symptoms. Clinical trial registered with www.clinicaltrials.gov (NCT00170222).
Towards interactive robots in autism therapy
This article discusses the potential of using interactive environments in autism therapy. We specifically address issues relevant to the Aurora project, which studies the possible role of autonomous, mobile robots as therapeutic tools for children with autism. Theories of mindreading, social cognition and imitation that informed the Aurora project are discussed and their relevance to the project is outlined. Our approach is put in the broader context of socially intelligent agents and interactive environments. We summarise results from trials with a particular mobile robot. Finally, we draw some comparisons to research on interactive virtual environments in the context of autism therapy and education. We conclude by discussing future directions and open issues.
Ketanserin, an Effective Third-Line Agent in Primary Hypertension
Thiazide diuretics and β-adrenoceptor antagonists remain the cornerstone of therapy in the stepped care approach to the treatment of hypertension (Joint National Committee on Detection, Evaluation, and Treatment of High Blood Pressure 1984). There is less consensus, however, as to the most appropriate third-line agent when this combination fails to control the blood pressure (McAreavey et al. 1984; Maclean et al. 1986; Ramsay et al. 1987). Thus, other agents need to be evaluated for this potential role. Ketanserin is a serotonin S2-receptor antagonist which also has weak α-adrenoceptor-blocking properties (Van Nueten et al. 1981). This combination of vasodilator properties makes ketanserin a potentially useful antihypertensive agent, whether used alone (Woittiez et al. 1986) or in conjunction with either thiazides or β-blockers (Beretta-Piccoli et al. 1987). Ketanserin is also said to have a 'favourable' effect on serum lipoproteins, to reduce platelet activity, and to reduce the incidence of cardiovascular events in high-risk patients (Hansson & Hedner 1987). Despite these properties, ketanserin has yet to find a role in the treatment of hypertension. The present study was designed to assess the value of ketanserin as a third-line agent in patients whose blood pressure was not adequately controlled by treatment with a fixed combination of atenolol and chlorthalidone.
Dynamics of Spiking Neurons Connected by Both Inhibitory and Electrical Coupling
We study the dynamics of a pair of intrinsically oscillating leaky integrate-and-fire neurons (identical and noise-free) connected by combinations of electrical and inhibitory coupling. We use the theory of weakly coupled oscillators to examine how synchronization patterns are influenced by cellular properties (intrinsic frequency and the strength of spikes) and coupling parameters (speed of synapses and coupling strengths). We find that, when inhibitory synapses are fast and the electrotonic effect of the suprathreshold portion of the spike is large, increasing the strength of weak electrical coupling promotes synchrony. Conversely, when inhibitory synapses are slow and the electrotonic effect of the suprathreshold portion of the spike is small, increasing the strength of weak electrical coupling promotes antisynchrony (see Fig. 10). Furthermore, our results indicate that, given a fixed total coupling strength, either electrical coupling alone or inhibition alone is better at enhancing neural synchrony than a combination of electrical and inhibitory coupling. We also show that these results extend to moderate coupling strengths.
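A toy version of this setup can be written directly: two identical leaky integrate-and-fire cells with diffusive (electrical) coupling, integrated with forward Euler. The parameters are illustrative, and the sketch deliberately omits the inhibitory synapses and the suprathreshold spike effect, which is exactly why, as the abstract explains, the resulting phase relation depends on those missing details rather than being a foregone conclusion.

```python
# Forward-Euler simulation of two identical LIF neurons with electrical
# coupling. Parameters are illustrative, not taken from the paper.

def simulate(g_el=0.1, i_drive=1.5, v_th=1.0, dt=1e-3, t_end=50.0):
    v = [0.0, 0.5]            # identical cells started out of phase
    spikes = ([], [])
    t = 0.0
    while t < t_end:
        # leaky integration plus diffusive (electrical) coupling
        dv0 = i_drive - v[0] + g_el * (v[1] - v[0])
        dv1 = i_drive - v[1] + g_el * (v[0] - v[1])
        v[0] += dt * dv0
        v[1] += dt * dv1
        for k in range(2):
            if v[k] >= v_th:  # threshold crossing: record spike, reset
                v[k] = 0.0
                spikes[k].append(t)
        t += dt
    return spikes

spikes = simulate()
# Both cells oscillate; the uncoupled period would be ln(I/(I-1)) ~ 1.1.
assert len(spikes[0]) > 30 and len(spikes[1]) > 30
delta = abs(spikes[0][-1] - spikes[1][-1])  # final spike-time offset
```

Adding the spike's electrotonic kick and an inhibitory synaptic current to `dv0`/`dv1` is what turns this toy into the regime the paper analyzes, where synapse speed and spike strength decide between synchrony and antisynchrony.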
Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds
Unsupervised over-segmentation of an image into regions of perceptually similar pixels, known as superpixels, is a widely used preprocessing step in segmentation algorithms. Superpixel methods reduce the number of regions that must be considered later by more computationally expensive algorithms, with a minimal loss of information. Nevertheless, as some information is inevitably lost, it is vital that superpixels not cross object boundaries, as such errors will propagate through later steps. Existing methods make use of projected color or depth information, but do not consider the three-dimensional geometric relationships between observed data points, which can be used to prevent superpixels from crossing regions of empty space. We propose a novel over-segmentation algorithm that uses voxel relationships to produce over-segmentations that are fully consistent with the spatial geometry of the scene in three-dimensional, rather than projective, space. Enforcing the constraint that segmented regions must have spatial connectivity prevents label flow across semantic object boundaries that might otherwise be violated. Additionally, as the algorithm works directly in 3D space, observations from several calibrated RGB+D cameras can be segmented jointly. Experiments on a large data set of human-annotated RGB+D images demonstrate a significant reduction in the occurrence of clusters crossing object boundaries, while maintaining speeds comparable to state-of-the-art 2D methods.
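The spatial-connectivity constraint can be demonstrated in miniature: grow a labeled region only through occupied, face-adjacent voxels, so a label cannot jump across empty space. This is a deliberate simplification; the actual supervoxel algorithm also weighs color and normal similarity when growing clusters.

```python
# Flood fill over 6-connected occupied voxels: a toy version of the
# connectivity constraint that stops labels at gaps in space.
from collections import deque

def connected_voxels(occupied, seed):
    """Return all occupied voxels reachable from `seed` via face adjacency."""
    region, frontier = {seed}, deque([seed])
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while frontier:
        x, y, z = frontier.popleft()
        for dx, dy, dz in steps:
            nbr = (x + dx, y + dy, z + dz)
            if nbr in occupied and nbr not in region:
                region.add(nbr)
                frontier.append(nbr)
    return region

# Two occupied slabs separated by one voxel of free space:
slab_a = {(x, 0, 0) for x in range(3)}      # x = 0, 1, 2
slab_b = {(x, 0, 0) for x in range(4, 7)}   # x = 4, 5, 6
region = connected_voxels(slab_a | slab_b, seed=(0, 0, 0))
assert region == slab_a                      # the gap stops label flow
```

A projective (2D) superpixel method could happily merge the two slabs if they overlap in the image plane; working in the voxel grid makes the gap explicit.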
Large-vocabulary audio-visual speech recognition by machines and humans
We compare automatic recognition with human perception of audio-visual speech in the large-vocabulary, continuous speech recognition (LVCSR) domain. Specifically, we study the benefit of the visual modality for both machines and humans, when combined with audio degraded by speech-babble noise at various signal-to-noise ratios (SNRs). We first consider an automatic speechreading system with a pixel-based visual front end that uses feature fusion for bimodal integration, and we compare its performance with an audio-only LVCSR system. We then describe results of human speech perception experiments, where subjects are asked to transcribe audio-only and audio-visual utterances at various SNRs. For both machines and humans, we observe approximately a 6 dB effective SNR gain compared to the audio-only performance at 10 dB; however, such gains diverge significantly at other SNRs. Furthermore, automatic audio-visual recognition outperforms human audio-only speech perception at low SNRs.
Secreted Frizzled-Related Protein 4 Inhibits Glioma Stem-Like Cells by Reversing Epithelial to Mesenchymal Transition, Inducing Apoptosis and Decreasing Cancer Stem Cell Properties
The Wnt pathway is integrally involved in regulating self-renewal, proliferation, and maintenance of cancer stem cells (CSCs). We explored the effect of the Wnt antagonist, secreted frizzled-related protein 4 (sFRP4), in modulating epithelial to mesenchymal transition (EMT) in CSCs from human glioblastoma cell lines, U87 and U373. sFRP4 chemo-sensitized CSC-enriched cells to the most commonly used anti-glioblastoma drug, temozolomide (TMZ), by the reversal of EMT. Cell movement, colony formation, and invasion in vitro were suppressed by sFRP4+TMZ treatment, which correlated with the switch of expression of markers from mesenchymal (Twist, Snail, N-cadherin) to epithelial (E-cadherin). sFRP4 treatment elicited activation of the Wnt-Ca2+ pathway, which antagonizes the Wnt/β-catenin pathway. Significantly, the chemo-sensitization effect of sFRP4 was correlated with the reduction in the expression of drug resistance markers ABCG2, ABCC2, and ABCC4. The efficacy of sFRP4+TMZ treatment was demonstrated in vivo using nude mice, which showed minimal tumor engraftment using CSCs pretreated with sFRP4+TMZ. These studies indicate that sFRP4 treatment would help to improve the response to commonly used chemotherapeutics in gliomas by modulating EMT via the Wnt/β-catenin pathway. These findings could be exploited for designing better targeted strategies to improve chemo-response and eventually eliminate glioblastoma CSCs.
Different binding mechanisms in biosorption of reactive dyes according to their reactivity.
Various binding mechanisms for the uptake of reactive dyes by the protonated waste biomass of Corynebacterium glutamicum were investigated. As model reactive dyes, Reactive Blue 4 (RB 4), Reactive Orange 16 (RO 16) and Reactive Yellow 2 (RY 2) were used in this study. The solution pH strongly influenced the sorption capacity and the binding mechanisms of reactive dyes by C. glutamicum. At acidic pH, the electrostatic interaction was found to be a major binding mechanism. The maximum uptakes of RY 2, RO 16 and RB 4 at pH 2 were estimated to be 155.0+/-14.1, 156.6+/-6.7 and 184.9+/-16.4 mg/g, respectively. Under alkaline conditions, the binding mechanisms were quite different according to the reactivity of the reactive dyes. It was found that chemical bonding existed between the biomass surface and dye molecules under basic pH conditions.
Quantitative Analysis of Human-Model Agreement in Visual Saliency Modeling: A Comparative Study
Visual attention is a process that enables biological and machine vision systems to select the most relevant regions from a scene. Relevance is determined by two components: 1) top-down factors driven by task and 2) bottom-up factors that highlight image regions that are different from their surroundings. The latter are often referred to as “visual saliency.” Modeling bottom-up visual saliency has been the subject of numerous research efforts during the past 20 years, with many successful applications in computer vision and robotics. Available models have been tested with different datasets (e.g., synthetic psychological search arrays, natural images or videos) using different evaluation scores (e.g., search slopes, comparison to human eye tracking) and parameter settings. This has made direct comparison of models difficult. Here, we perform an exhaustive comparison of 35 state-of-the-art saliency models over 54 challenging synthetic patterns, three natural image datasets, and two video datasets, using three evaluation scores. We find that although model rankings vary, some models consistently perform better. Analysis of datasets reveals that existing datasets are highly center-biased, which influences some of the evaluation scores. Computational complexity analysis shows that some models are very fast, yet yield competitive eye movement prediction accuracy. Different models often have common easy/difficult stimuli. Furthermore, several concerns in visual saliency modeling, eye movement datasets, and evaluation scores are discussed and insights for future work are provided. Our study allows one to assess the state of the art, helps to organize this rapidly growing field, and sets a unified comparison framework for gauging future efforts, similar to the PASCAL VOC challenge in the object recognition and detection domains.
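One of the standard human-model agreement scores used in such comparisons is the Normalized Scanpath Saliency (NSS): z-score the saliency map, then average it at the human fixation locations, so chance performance is 0. The tiny map below is a self-contained sketch; real benchmarks use full-resolution maps and thousands of fixations.

```python
# NSS: average of the z-scored saliency map at fixated pixels.
from statistics import mean, pstdev

def nss(saliency_map, fixations):
    values = [v for row in saliency_map for v in row]
    mu, sigma = mean(values), pstdev(values)
    # Z-score the whole map, then sample it at (row, col) fixations.
    return mean((saliency_map[r][c] - mu) / sigma for r, c in fixations)

smap = [[0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0]]
# A model that peaks where humans fixated scores well above chance (0):
assert nss(smap, [(1, 1)]) > 2.0
# Evaluated at a non-fixated pixel, the same map scores below chance:
assert nss(smap, [(0, 0)]) < 0.0
```

The center bias the study reports matters precisely because scores like this one reward any map that happens to concentrate mass where fixations cluster, which for most datasets is the image center.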
Feature-Rich Networks for Knowledge Base Completion
We propose jointly modelling Knowledge Bases and aligned text with Feature-Rich Networks. Our models perform Knowledge Base Completion by learning to represent and compose diverse feature types from partially aligned and noisy resources. We perform experiments on Freebase utilizing additional entity type information and syntactic textual relations. Our evaluation suggests that the proposed models can better incorporate side information than previously proposed combinations of bilinear models with convolutional neural networks, showing large improvements when scoring the plausibility of unobserved facts with associated textual mentions.
Diagnostic criteria for malingered neurocognitive dysfunction: proposed standards for clinical practice and research.
Over the past 10 years, widespread and concerted research efforts have led to increasingly sophisticated and efficient methods and instruments for detecting exaggeration or fabrication of cognitive dysfunction. Despite these psychometric advances, the process of diagnosing malingering remains difficult and largely idiosyncratic. This article presents a proposed set of diagnostic criteria that define psychometric, behavioral, and collateral data indicative of possible, probable, and definite malingering of cognitive dysfunction, for use in clinical practice and for defining populations for clinical research. Relevant literature is reviewed, and limitations and benefits of the proposed criteria are discussed.
COLOR IMAGE SEGMENTATION BASED ON MEAN SHIFT AND NORMALIZED CUTS
An approach to image segmentation based on the mean shift and normalized cuts algorithms is proposed, together with an implementation of one of its applications. The normalized cuts algorithm gives good accuracy and better segmentation than most existing methods. By applying the mean shift algorithm to the original image to partition it into subgraphs, image matrices of lower dimension can be created; the proposed algorithm therefore first applies mean shift to obtain subgraphs and then applies normalized cuts. Currency denomination detection is one application of image segmentation: it is very difficult to count notes of different denominations in a bunch, so this paper proposes an image segmentation technique to extract the denomination of paper currency. The extracted region of interest (ROI) can be used with pattern recognition and neural network matching techniques. First, the image is acquired with a simple flatbed scanner at a fixed dpi and size, and the pixel levels are set to obtain the image. Filters and segmentation algorithms are then applied to extract the denomination value of the note; different pixel levels are used for notes of different denominations. A pattern recognition and neural network matching technique is used to find the value of the paper currency. After the pattern is matched, the result is converted to an audio file, which aids recognition of the given Indian currency.
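The mean shift stage can be sketched in one dimension with a flat kernel (the "blurring" variant, where every point is repeatedly replaced by the mean of its neighbors within a bandwidth, so points drift onto local density modes). This is only a sketch of the first stage under simplified assumptions; the paper's method operates on color images and follows mean shift with normalized cuts.

```python
# Blurring mean shift in 1D with a flat kernel: points collapse onto modes.

def mean_shift_1d(points, bandwidth=1.0, iters=50):
    shifted = list(points)
    for _ in range(iters):
        new = []
        for p in shifted:
            # Neighbors within the bandwidth window around p.
            nbrs = [q for q in shifted if abs(q - p) <= bandwidth]
            new.append(sum(nbrs) / len(nbrs))
        shifted = new
    return shifted

data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
modes = mean_shift_1d(data)
# The six points collapse onto two modes, one near 1 and one near 9.
assert max(modes[:3]) - min(modes[:3]) < 1e-6
assert abs(modes[0] - 1.0) < 0.2 and abs(modes[3] - 9.0) < 0.2
```

In the segmentation pipeline, each such mode becomes one subgraph node, so the expensive normalized-cuts eigenproblem is solved on a far smaller matrix than the raw pixel graph.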
North American Water and Environment Congress & Destructive Water
This proceedings contains papers presented at the North American Water and Environment Congress ’96 of the ASCE’s Environmental Engineering Division, Water Resources Engineering Division and Water Resources Planning and Management Division, held in Anaheim, California, June 22-28, 1996. Also included in this proceedings are the abstracts of the papers presented at the Destructive Water Conference of the IAHS and ASCE held concurrently at the same location. The purpose of the North American Water and Environment Congress ’96 was to provide a forum of discussion and exchange of information on a broad spectrum of areas in water resources and environmental engineering, including planning, design, research and management. One major focus was on the international issues relating to the North American Free Trade Agreement (NAFTA). In addition, the Destructive Water Conference addressed a broad spectrum of issues related to disaster abatement and control for all types of flooding and accidental pollution. The papers and abstracts in this proceedings explore potential solutions to critical water resources and environmental problems. The topics covered include, but are not limited to: 1) Hydrologic and hydraulic analyses; 2) irrigation and drainage issues; 3) environmental impacts and mitigation; 4) surface water and groundwater studies; 5) flood damage and risk assessment; 6) bridge scour evaluation; 7) wetlands studies; 8) water and wastewater treatment; 9) geoenvironmental topics; 10) geographic information systems (GIS); 11) physical and mathematical modeling; 12) computer applications; 13) wildlife and aquatic habitat management; 14) U.S.-Mexico border issues; 15) U.S.-Canada water and environmental issues; 16) NAFTA related matters; and 17) international water and environmental problems and solutions.
Internet of Things based Expert System for Smart Agriculture
The agriculture sector is evolving with the advent of information and communication technology. Efforts are being made to enhance productivity and reduce losses by using state-of-the-art technology and equipment. As most farmers are unaware of the technology and the latest practices, many expert systems have been developed around the world to facilitate them; however, these expert systems rely on a stored knowledge base. We propose an expert system based on the Internet of Things (IoT) that uses input data collected in real time. It will help farmers take proactive and preventive actions to minimize the losses due to diseases and insect pests.
SCNet: A simplified encoder-decoder CNN for semantic segmentation
We present a simplified and novel fully convolutional neural network (CNN) architecture for semantic pixel-wise segmentation, named SCNet. Different from current CNN pipelines, the proposed network uses only convolution layers, with no pooling layers. The key objective of this model is to offer a more simplified CNN model with equal benchmark performance and results. It is an encoder-decoder fully convolutional network: the encoder is based on the 16-layer VGG network, while the decoder uses upsampling and deconvolution units followed by a pixel-wise classification layer. The proposed network is simple and offers a reduced search space for segmentation by using low-resolution encoder feature maps. It also offers a great reduction in trainable parameters by reusing the encoder layers' sparse feature maps. The proposed model offers outstanding performance and enhanced results in terms of architectural simplicity, number of trainable parameters, and computational time.
A novel automatic white balance method for digital still cameras
Automatic white balance is an important function of digital still cameras. The goal of white balance is to adjust the image so that it looks as if it were taken under canonical light. We propose a novel technique to detect reference white points in an image. Our algorithm uses a dynamic threshold for white point detection and is more flexible than other existing ad hoc algorithms. We have tested the algorithm on 50 images taken under various light sources. The results show that the algorithm is superior or comparable to other methods in both objective and subjective evaluations. The complexity of the algorithm is quite low, which makes it attractive for real-world applications.
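The white-point idea can be sketched as follows. This is a hedged simplification, not the paper's actual algorithm: here the candidate white points are simply the brightest pixels (the paper uses a dynamic threshold in a chrominance space), and each channel is scaled so that their average becomes neutral gray.

```python
# Simplified white-point white balance: pick bright pixels as reference
# whites, then compute per-channel gains that make them neutral.

def white_balance(pixels, top_fraction=0.2):
    # pixels: list of (r, g, b) tuples in [0, 255]
    ranked = sorted(pixels, key=lambda p: sum(p), reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    whites = ranked[:k]                      # brightest pixels stand in for white
    avg = [sum(p[c] for p in whites) / k for c in range(3)]
    gray = sum(avg) / 3.0
    gains = [gray / a for a in avg]          # assumes no channel averages to zero
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]

# A scene lit by a reddish source: the brightest patch is (200, 160, 120).
scene = [(200, 160, 120), (100, 80, 60), (50, 40, 30)]
balanced = white_balance(scene, top_fraction=0.4)
r, g, b = balanced[0]
assert abs(r - g) < 1e-6 and abs(g - b) < 1e-6  # reference patch is now neutral
```

The dynamic threshold in the actual method replaces the fixed `top_fraction`, which is what makes it adapt to scenes where the brightest pixels are not actually white.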
Extract Me If You Can: Abusing PDF Parsers in Malware Detectors
Owing to the popularity of the PDF format and the continued exploitation of Adobe Reader, the detection of malicious PDFs remains a concern. All existing detection techniques rely on the PDF parser to a certain extent, while the complexity of the PDF format leaves an abundant space for parser confusion. To quantify the difference between these parsers and Adobe Reader, we create a reference JavaScript extractor by directly tapping into Adobe Reader at locations identified through a mostly automatic binary analysis technique. By comparing the output of this reference extractor against that of several open-source JavaScript extractors on a large data set obtained from VirusTotal, we are able to identify hundreds of samples which existing extractors fail to extract JavaScript from. By analyzing these samples we are able to identify several weaknesses in each of these extractors. Based on these lessons, we apply several obfuscations on a malicious PDF sample, which can successfully evade all the malware detectors tested. We call this evasion technique a PDF parser confusion attack. Lastly, we demonstrate that the reference JavaScript extractor improves the accuracy of existing JavaScript-based classifiers and how it can be used to mitigate these parser limitations in a real-world setting.
Plantar fasciitis and the windlass mechanism: a biomechanical link to clinical practice.
OBJECTIVE Plantar fasciitis is a prevalent problem, with limited consensus among clinicians regarding the most effective treatment. The purpose of this literature review is to provide a systematic approach to the treatment of plantar fasciitis based on the windlass mechanism model. DATA SOURCES We searched MEDLINE, SPORT Discus, and CINAHL from 1966 to 2003 using the key words plantar fasciitis, windlass mechanism, pronation, heel pain, and heel spur. DATA SYNTHESIS We offer a biomechanical application for the evaluation and treatment of plantar fasciitis based on a review of the literature for the windlass mechanism model. This model provides a means for describing plantar fasciitis conditions such that clinicians can formulate a potential causal relationship between the conditions and their treatments. CONCLUSIONS/RECOMMENDATIONS Clinicians' understanding of the biomechanical causes of plantar fasciitis should guide the decision-making process concerning the evaluation and treatment of heel pain. Use of this approach may improve clinical outcomes because intervention does not merely treat physical symptoms but actively addresses the influences that resulted in the condition. Principles from this approach might also provide a basis for future research investigating the efficacy of plantar fascia treatment.
Miniaturized loaded crossed dipole antenna with omnidirectional radiation pattern in the horizontal plane
In this paper, a new design of a loaded crossed dipole antenna (LCDA) with an omnidirectional radiation pattern in the horizontal plane and broadband characteristics is investigated. An efficient optimization procedure based on a genetic algorithm is employed to design the LCDA and to determine parameters that work over a 25:1 bandwidth. The simulation results are compared with measurements.
Corpus Based Classification of Text in Australian Contracts
Written contracts are a fundamental framework for commercial and cooperative transactions and relationships. Limited research has been published on the application of machine learning and natural language processing (NLP) to contracts. In this paper we report the classification of components of contract texts using machine learning and hand-coded methods. Authors studying a range of domains have found that combining machine learning and rule-based approaches increases the accuracy of machine learning. We find similar results, which suggests the utility of leveraging hand-coded classification rules for machine learning. We attained an average accuracy of 83.48% on a multiclass labelling task on 20 contracts by combining machine learning and rule-based approaches, improving performance over machine learning alone.
Real Time Pattern Recognition and Identification of Co-ordinates with LabVIEW
Digital image processing (DIP) is of great significance for almost any task, from simple processing of photographic signals to complex industrial systems using machine vision. In this paper, the basics of image processing in LabVIEW are briefly described. The approach involves capturing the image of an object to be analysed and comparing it with a reference image template of the object using a pattern-matching algorithm. The coordinates of the object are also identified by tracking it on the screen. A basic pattern-matching algorithm is modified to snap and track the image in real time. Keywords— LabVIEW, IMAQ, pattern matching, real-time tracking.
Blockage and directivity in 60 GHz wireless personal area networks: from cross-layer model to multihop MAC design
We present a cross-layer modeling and design approach for multigigabit indoor wireless personal area networks (WPANs) utilizing the unlicensed millimeter (mm) wave spectrum in the 60 GHz band. Our approach accounts for the following two characteristics that sharply distinguish mm wave networking from that at lower carrier frequencies. First, mm wave links are inherently directional: directivity is required to overcome the higher path loss at smaller wavelengths, and it is feasible with compact, low-cost circuit board antenna arrays. Second, indoor mm wave links are highly susceptible to blockage because of the limited ability to diffract around obstacles such as the human body and furniture. We develop a diffraction-based model to determine network link connectivity as a function of the locations of stationary and moving obstacles. For a centralized WPAN controlled by an access point, it is shown that multihop communication, with the introduction of a small number of relay nodes, is effective in maintaining network connectivity in scenarios where single-hop communication would suffer unacceptable outages. The proposed multihop MAC protocol accounts for the fact that every link in the WPAN is highly directional, and is shown, using packet level simulations, to maintain high network utilization with low overhead.
History painting and pictures of everyday life: Tatyana Nazarenko's artistic career in the Soviet Union of the 1970s
This thesis focuses on the Soviet painter Tatyana Nazarenko and her position as an influential artist in the Soviet Union of the 1970s, a decade when Nazarenko depicted everyday life and events from Russian history. The main purpose of this thesis is to shed light upon the importance of these motifs in their historical, political and aesthetic context. In this way, the thesis is a study of the artist's work in a contextual perspective. In order to understand the general conditions for Soviet art in the 1970s, and Tatyana Nazarenko's picture world, attention is given to the official State-regulated art, as Socialist Realism came to be the normative frame of reference for artistic life and for individual artists. Artistic life in the Soviet Union was strictly regulated, and official art dominated from the early 1930s until the collapse of the Soviet Union, although especially during the 1970s it was challenged by so-called unofficial art, also known as underground art. The unofficial art, as well as the official, became important for Tatyana Nazarenko: while not belonging to either side, she maintained a constant relationship with both. She belonged, however, to a small art circle, balancing between the official direction and criticism of the system, later known as the permitted. This thesis clarifies the relationship between what is defined as official, permitted and unofficial art. In exposing her aesthetic strategies, it is shown in what way she deviates from the official and how far an artist could stretch the permitted limitations. From the question of her aesthetic strategies a further question arises of how an awareness of history is expressed in motif and form. Finally, the importance of women's experience in her picture world is discussed.
Picture material in this thesis is composed partly of some thirty works from Tatyana Nazarenko´s own production and partly of works with a contextualized and comparative function, encompassing some fifty works taken from Soviet Art History. A few examples from Western Art History constitute further comparative material. The motifs consist mainly of depictions of historical events and pictures of everyday life i.e. genre pictures.
Location Fingerprinting With Bluetooth Low Energy Beacons
The complexity of indoor radio propagation has resulted in location-awareness being derived from empirical fingerprinting techniques, where positioning is performed via a previously-constructed radio map, usually of WiFi signals. The recent introduction of the Bluetooth Low Energy (BLE) radio protocol provides new opportunities for indoor location. It supports portable battery-powered beacons that can be easily distributed at low cost, giving it distinct advantages over WiFi. However, its differing use of the radio band brings new challenges too. In this work, we provide a detailed study of BLE fingerprinting using 19 beacons distributed around a ~600 m² testbed to position a consumer device. We demonstrate the high susceptibility of BLE to fast fading, show how to mitigate this, and quantify the true power cost of continuous BLE scanning. We further investigate the choice of key parameters in a BLE positioning system, including beacon density, transmit power, and transmit frequency. We also provide quantitative comparison with WiFi fingerprinting. Our results show advantages to the use of BLE beacons for positioning. For one-shot (push-to-fix) positioning we achieve < 2.6 m error 95% of the time for a dense BLE network (1 beacon per 30 m²), compared to < 4.8 m for a reduced density (1 beacon per 100 m²) and < 8.5 m for an established WiFi network in the same area.
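A minimal sketch of the fingerprinting idea above, assuming a pre-surveyed radio map of per-beacon mean RSSI values (the beacon IDs, map layout, and k-nearest-neighbour matching here are illustrative, not the authors' implementation):

```python
# Toy fingerprint positioning: offline radio map of per-beacon RSSI averages,
# online k-nearest-neighbour match in signal space.
import math

def knn_position(radio_map, observed, k=3):
    """radio_map: {(x, y): {beacon_id: mean_rssi}}; observed: {beacon_id: rssi}.
    Returns the centroid of the k reference points closest in RSSI space."""
    def dist(fingerprint):
        shared = set(fingerprint) & set(observed)
        if not shared:
            return float("inf")  # no beacons in common: never the best match
        return math.sqrt(sum((fingerprint[b] - observed[b]) ** 2 for b in shared))

    ranked = sorted(radio_map, key=lambda pt: dist(radio_map[pt]))[:k]
    return (sum(p[0] for p in ranked) / k, sum(p[1] for p in ranked) / k)
```

Averaging several scans per beacon before matching is one simple way to damp the fast fading the study highlights.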
How to Build Templates for RDF Question/Answering: An Uncertain Graph Similarity Join Approach
A challenging task in natural language question answering (Q/A for short) over RDF knowledge graphs is how to bridge the gap between unstructured natural language questions (NLQ) and graph-structured RDF data. One of the effective tools is the "template", which is often used in many existing RDF Q/A systems. However, few of them study how to generate templates automatically. To the best of our knowledge, we are the first to propose a join approach for template generation. Given a workload D of SPARQL queries and a set N of natural language questions, the goal is to find pairs ⟨q, n⟩, for q ∈ D ∧ n ∈ N, where SPARQL query q is the best match for natural language question n. These pairs provide promising hints for automatic template generation. Due to the ambiguity of natural languages, we model the problem above as an uncertain graph join task. We propose several structural and probability pruning techniques to speed up the join. Extensive experiments over real RDF Q/A benchmark datasets confirm both the effectiveness and efficiency of our approach.
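The query/question pairing step can be illustrated with a naive lexical similarity join — a toy stand-in for the paper's uncertain graph join, with Jaccard token overlap and a pruning threshold as illustrative assumptions:

```python
def best_matches(queries, questions, threshold=0.3):
    """Pair each NL question with its best-matching query by Jaccard token
    overlap, discarding pairs that fall below the pruning threshold."""
    def jaccard(a, b):
        a, b = set(a.lower().split()), set(b.lower().split())
        return len(a & b) / len(a | b) if a | b else 0.0

    pairs = []
    for n in questions:
        score, q = max((jaccard(q, n), q) for q in queries)
        if score >= threshold:  # pruning: drop unpromising pairs entirely
            pairs.append((q, n, round(score, 2)))
    return pairs
```

The real system compares graph structures with probabilistic pruning rather than token sets, but the join shape — score all candidate pairs, prune early, keep the best — is the same.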
Power quality analysis of Residual Current Device [RCD] nuisance tripping at commercial buildings
This paper presents a case study on nuisance tripping of Residual Current Devices (RCDs) and equipment damage due to high neutral-to-ground voltage. The relevant power quality elements are highlighted. The neutral-to-ground voltage disturbance events are analysed using the CBEMA curve; events that fall outside the CBEMA curve readily contribute to the above power quality problems. By implementing a proper grounding arrangement based on the TT, TN-S and TN-C-S systems and applying a separately derived system, the problems can be mitigated. Adequate wiring, proper grounding installation, and the addition of a separately derived source near the load will eliminate the high neutral-to-ground voltage events.
Drawbridge: software-defined DDoS-resistant traffic engineering
End hosts in today's Internet have the best knowledge of the type of traffic they should receive, but they play no active role in traffic engineering. Traffic engineering is conducted by ISPs, which unfortunately are blind to specific user needs. End hosts are therefore subject to unwanted traffic, particularly from Distributed Denial of Service (DDoS) attacks. This research proposes a new system called DrawBridge to address this traffic engineering dilemma. By realizing the potential of software-defined networking (SDN), in this research we investigate a solution that enables end hosts to use their knowledge of desired traffic to improve traffic engineering during DDoS attacks.
A regression model for predicting optimal purchase timing for airline tickets
Optimal timing for airline ticket purchasing from the consumer’s perspective is challenging principally because buyers have insufficient information for reasoning about future price movements. This paper presents a model for computing expected future prices and reasoning about the risk of price changes. The proposed model is used to predict the future expected minimum price of all available flights on specific routes and dates based on a corpus of historical price quotes. Also, we apply our model to predict prices of flights with specific desirable properties such as flights from a specific airline, non-stop only flights, or multi-segment flights. By comparing models with different target properties, buyers can determine the likely cost of their preferences. We present the expected costs of various preferences for two high-volume routes. Performance of the prediction models presented is achieved by including instances of time-delayed features, by imposing a class hierarchy among the raw features based on feature similarity, and by pruning the classes of features used in prediction based on in-situ performance. Our results show that purchase policy guidance using these models can lower the average cost of purchases in the 2 month period prior to a desired departure. The proposed method compares favorably with a deployed commercial web site providing similar purchase policy recommendations.
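The core idea of predicting future prices from time-delayed features can be sketched with a one-lag ordinary least squares model — a deliberately simplified stand-in for the paper's model, with the lag choice and data purely illustrative:

```python
def lagged_series(prices, lag):
    """Build (x, y) pairs where x is the price `lag` days earlier."""
    return [(prices[t - lag], prices[t]) for t in range(lag, len(prices))]

def fit_line(pairs):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs); sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs); sxy = sum(x * y for x, y in pairs)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def predict_next(prices, lag=1):
    """Predict tomorrow's price from the price `lag` days before it."""
    a, b = fit_line(lagged_series(prices, lag))
    return a + b * prices[-lag]
```

The paper's model additionally groups raw features into a class hierarchy and prunes feature classes by in-situ performance; this sketch shows only the time-delayed-feature regression at its core.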
Measuring Engagement at Work: Validation of the Chinese Version of the Utrecht Work Engagement Scale
BACKGROUND Work engagement is a positive work-related state of fulfillment characterized by vigor, dedication, and absorption. Previous studies have operationalized the construct through development of the Utrecht Work Engagement Scale. Apart from the original three-factor 17-item version of the instrument (UWES-17), there exists a nine-item shortened revised version (UWES-9). PURPOSE The current study explored the psychometric properties of the Chinese version of the Utrecht Work Engagement Scale in terms of factorial validity, scale reliability, descriptive statistics, and construct validity. METHOD A cross-sectional questionnaire survey was conducted in 2009 among 992 workers from over 30 elderly service units in Hong Kong. RESULTS Confirmatory factor analyses revealed a better fit for the three-factor model of the UWES-9 than the UWES-17 and the one-factor model of the UWES-9. The three factors showed acceptable internal consistency and strong correlations with factors in the original versions. Engagement was negatively associated with perceived stress and burnout, and positively associated with age and holistic care climate. CONCLUSION The UWES-9 demonstrates adequate psychometric properties, supporting its use in future research in the Chinese context.
A Computational Account of Syntactic, Semantic and Discourse Principles for Anaphora Resolution
We present a unified framework for the computational implementation of syntactic, semantic, pragmatic and even "stylistic" constraints on anaphora. We build on our BUILDRS implementation of Discourse Representation (DR) Theory and Lexical Functional Grammar (LFG) discussed in Wada & Asher (1986). We develop and argue for a semantically based processing model for anaphora resolution that exploits a number of desirable features: (1) the partial semantics provided by the discourse representation structures (DRSs) of DR theory, (2) the use of syntactic and lexical features to filter out unacceptable potential anaphoric antecedents from the set of logically possible antecedents determined by the logical structure of the DRS, (3) the use of pragmatic or discourse constraints, noted by those working on focus, to impose a salience ordering on the set of grammatically acceptable potential antecedents. Only where there is a marked difference in the degree of salience among the possible antecedents does the salience ranking allow us to make predictions on preferred readings. In cases where the difference is extreme, we predict the discourse to be infelicitous if, because of other constraints, one of the markedly less salient antecedents must be linked with the pronoun. We also briefly consider the applications of our processing model to other definite noun phrases besides anaphoric pronouns.
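The filter-then-rank pipeline in points (2) and (3) can be sketched as follows — a hypothetical illustration, not the BUILDRS implementation; the feature names and salience scores are assumptions:

```python
def resolve_pronoun(pronoun, candidates):
    """pronoun: {'gender', 'number'}; candidates: list of dicts with
    'text', 'gender', 'number', 'salience'.
    Step 1: the syntactic/lexical filter removes candidates that fail
    agreement. Step 2: survivors are ordered by discourse salience, so the
    most salient antecedent is the preferred reading."""
    agreeing = [c for c in candidates
                if c["gender"] == pronoun["gender"]
                and c["number"] == pronoun["number"]]
    return sorted(agreeing, key=lambda c: c["salience"], reverse=True)
```

In the framework described above, a preference is asserted only when the salience gap between survivors is marked; a sketch like this would add that comparison on top of the ranking.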
Does multitasking with mobile phones affect learning? A review
Mobile phone multitasking is widely considered to be a major source of distraction in academic performance. This paper attempts to review the emerging literature by focusing on three questions concerning the influence of mobile phone multitasking on academic performance: (a) How does mobile phone multitasking impair learning? (b) Why does mobile phone use impair learning? (c) How can mobile phone distraction be prevented? We used multiple strategies to locate the existing research literature and identified 132 studies published during 1999–2014. Mobile phone multitasking and distractibility are reviewed in three major aspects: distraction sources (ringing of the mobile phone, texting, and social applications), distraction targets (reading and attending), and distraction subjects (personality, gender, and culture). We also compare the results of these studies with the findings on mobile phone multitasking and driving, the earliest area of mobile phone multitasking research. Both the limitations of existing research and future research directions are discussed. © 2015 Elsevier Ltd. All rights reserved.
Model Predictive Control in Power Electronics: A Hybrid Systems Approach
The field of power electronics poses challenging control problems that cannot be treated in a complete manner using traditional modelling and controller design approaches. The main difficulty arises from the hybrid nature of these systems due to the presence of semiconductor switches that induce different modes of operation and operate with a high switching frequency. Since the control techniques traditionally employed in industry feature a significant potential for improving the performance and the controller design, the field of power electronics invites the application of advanced hybrid systems methodologies. The computational power available today and the recent theoretical advances in the control of hybrid systems allow one to tackle these problems in a novel way that improves the performance of the system, and is systematic and implementable. In this paper, this is illustrated by two examples, namely the Direct Torque Control of three-phase induction motors and the optimal control of switch-mode dc-dc converters.
Surface Elevation Change and Susceptibility of Different Mangrove Zones to Sea-Level Rise on Pacific High Islands of Micronesia
Mangroves on Pacific high islands offer a number of important ecosystem services to both natural ecological communities and human societies. High islands are subjected to constant erosion over geologic time, which establishes an important source of terrigeneous sediment for nearby marine communities. Many of these sediments are deposited in mangrove forests and offer mangroves a potentially important means for adjusting surface elevation with rising sea level. In this study, we investigated sedimentation and elevation dynamics of mangrove forests in three hydrogeomorphic settings on the islands of Kosrae and Pohnpei, Federated States of Micronesia (FSM). Surface accretion rates ranged from 2.9 to 20.8 mm y−1, and are high for naturally occurring mangroves. Although mangrove forests in Micronesian high islands appear to have a strong capacity to offset elevation losses by way of sedimentation, elevation change over 6½ years ranged from −3.2 to 4.1 mm y−1, depending on the location. Mangrove surface elevation change also varied by hydrogeomorphic setting and river, and suggested differential, and not uniformly bleak, susceptibilities among Pacific high island mangroves to sea-level rise. Fringe, riverine, and interior settings registered elevation changes of −1.30, 0.46, and 1.56 mm y−1, respectively, with the greatest elevation deficit (−3.2 mm y−1) from a fringe zone on Pohnpei and the highest rate of elevation gain (4.1 mm y−1) from an interior zone on Kosrae. Relative to sea-level rise estimates for FSM (0.8–1.8 mm y−1) and assuming a consistent linear trend in these estimates, soil elevations in mangroves on Kosrae and Pohnpei are experiencing between an annual deficit of 4.95 mm and an annual surplus of 3.28 mm. Although natural disturbances are important in mediating elevation gain in some situations, constant allochthonous sediment deposition probably matters most on these Pacific high islands, and is especially helpful in certain hydrogeomorphic zones. 
Fringe mangrove forests are most susceptible to sea-level rise, such that protection of these outer zones from anthropogenic disturbances (for example, harvesting) may slow the rate at which these zones convert to open water.
"One of the greatest medical success stories:" Physicians and nurses' small stories about vaccine knowledge and anxieties.
In recent years, the Canadian province of Alberta experienced outbreaks of measles, mumps, pertussis, and influenza. Even so, the dominant cultural narrative maintains that vaccines are safe, effective, and necessary to maintain population health. Many vaccine supporters have expressed anxieties that stories contradicting this narrative have lowered herd immunity levels because they frighten the public into avoiding vaccination. As such, vaccine policies often emphasize educating parents and the public about the importance and safety of vaccination. These policies rely on health professionals to encourage vaccine uptake and assume that all professionals support vaccination. Health professionals, however, are socially positioned between vaccine experts (such as immunologists) and non-experts (the wider public). In this article, I discuss health professionals' anxieties about the potential risks associated with vaccination and about the limitations of Alberta's immunisation program. Specifically, I address the question: if medical knowledge overwhelmingly supports vaccination, then why do some professionals continue to question certain vaccines? To investigate this topic, I interviewed twenty-seven physicians and seven nurses. Using stock images and small stories that interviewees shared about their vaccine anxieties, I challenge the common assumption that all health professionals support vaccines uncritically. All interviewees provided generic statements that supported vaccination and Alberta's immunisation program, but they expressed anxieties when I asked for details. I found that their anxieties reflected nuances that the culturally dominant vaccine narrative overlooks. In particular, they critiqued the influence of pharmaceutical companies, the perceived newness of specific vaccines, and the limitations of medical knowledge and vaccine schedules.
A decision support system for determining the optimal size of a new expressway service area: Focused on the profitability
Since the early 1990s, South Korea has been expanding its expressways. As of July 2013, a total of 173 expressway service areas (ESAs) had been established. Among these, 31 ESAs were closed due to financial deficits. To address this challenge, this study aimed to develop a decision support system for determining the optimal size of a new ESA, focusing on the profitability of the ESA. This study adopted a case-based reasoning approach as the main research method because it is necessary to provide historical data as a reference in determining the optimal size of a new ESA, which is more suitable for the decision-making process from a practical perspective. This study used a total of 106 general ESAs to develop the proposed system. Compared to the conventional process (i.e., direction estimation), the prediction accuracy of the improved process (i.e., a three-phase estimation process) was improved by 9.84%. The computational time required for the optimization of the proposed system was determined to be less than 10 min (from 1.75 min to 9.93 min). The proposed system could be useful to the final decision-maker for the following purposes: (i) a probability estimation model for determining the optimal size of a new ESA during the planning stage; (ii) an approximate initial construction cost estimation model for a new ESA using the estimated sales in the ESA; and (iii) a comparative assessment model for evaluating the sales per building area of existing ESAs.
Automatic Liver Segmentation Using an Adversarial Image-to-Image Network
Automatic liver segmentation in 3D medical images is essential in many clinical applications, such as pathological diagnosis of hepatic diseases, surgical planning, and postoperative assessment. However, it is still a very challenging task due to the complex background, fuzzy boundary, and varied appearance of the liver. In this paper, we propose an automatic and efficient algorithm to segment the liver from 3D CT volumes. A deep image-to-image network (DI2IN) is first deployed to generate the liver segmentation, employing a convolutional encoder-decoder architecture combined with multi-level feature concatenation and deep supervision. Then an adversarial network is utilized during the training process to discriminate the output of DI2IN from ground truth, which further boosts the performance of DI2IN. The proposed method is trained on an annotated dataset of 1000 CT volumes with various scanning protocols (e.g., contrast and non-contrast, various resolutions and positions) and large variations in populations (e.g., ages and pathology). Our approach outperforms the state-of-the-art solutions in terms of segmentation accuracy and computing efficiency.
Fear of being laughed at and social anxiety : A preliminary psychometric study
The present study examines the relationship between questionnaire measures of social phobia and gelotophobia. A sample of 211 Colombian adults filled in Spanish versions of the Social Anxiety and Distress scale (SAD; Watson & Friend, 1969), the Fear of Negative Evaluation scale (FNE; Watson & Friend, 1969) and the GELOPH<15> (Ruch & Proyer, 2008). Results confirmed that both the Social Anxiety and Distress scale and the Fear of Negative Evaluation scale overlapped with the fear of being laughed at without being identical to it. The SAD and FNE correlated highly with the GELOPH<15>, but not all high scorers on these scales expressed a fear of being laughed at. Furthermore, an item factor analysis yielded three factors that were mostly loaded by items of the respective scales. This three-factor structure was verified using confirmatory factor analysis. A measurement model where one general factor of social anxiety was specified, or another where two different factors were defined (gelotophobia vs. social anxiety assessed by SAD and FNE), showed a very poor fit to the data. It is concluded that the fear of being laughed at cannot fully be accounted for by these measures of social phobia.
Performance evaluation of health services
AZEVEDO, A.C. de. Performance evaluation of health services. Rev. Saúde públ., S. Paulo, 25: 64-71, 1991. Drawing on the recent literature (up to 1988) on the evaluation of health services in general and of hospital performance in particular, the different conceptual and methodological aspects involved are highlighted, beginning with the first attempts within the American College of Surgeons, through the creation and evolution of the American Joint Commission on Accreditation of Hospitals, up to the most recent conceptual and methodological efforts. The methodology of diagnosis-related groups ("DRGs") and severity-of-illness indicators are highlighted, and the evolution of this incipient field of knowledge and practice in the national (Brazilian) context is discussed. The origins of the recent international interest in the problem are reviewed, namely the widespread increase in the cost of health services, the growing number of lawsuits in some countries, and the marked increase in the complexity of procedures in many specialties. The sources of information currently used in the process are noted: direct observation (case/control studies), medical records, and the summary instruments frequently used for reimbursement of care. The profound influences that the evaluation process has had on health practice are discussed, particularly the standardization of procedures, the staging of conditions, trajectory studies, studies of tracer conditions ("tracers"), and the alternative that has most influenced the management of complex health situations: diagnostic-therapeutic protocols, already widely used in some areas, such as cancer treatment, including in Brazil.
Seven principles of goal activation: a systematic approach to distinguishing goal priming from priming of non-goal constructs.
Countless studies have recently purported to demonstrate effects of goal priming; however, it is difficult to muster unambiguous support for the claims of these studies because of the lack of clear criteria for determining whether goals, as opposed to alternative varieties of mental representations, have indeed been activated. Therefore, the authors offer theoretical guidelines that may help distinguish between semantic, procedural, and goal priming. Seven principles that are hallmarks of self-regulatory processes are proposed: Goal-priming effects (a) involve value, (b) involve postattainment decrements in motivation, (c) involve gradients as a function of distance to the goal, (d) are proportional to the product of expectancy and value, (e) involve inhibition of conflicting goals, (f) involve self-control, and (g) are moderated by equifinality and multifinality. How these principles might help distinguish between automatic activation of goals and priming effects that do not involve goals is discussed.
Action Recognition with Joint Attention on Multi-Level Deep Features
We propose a novel deep supervised neural network for the task of action recognition in videos, which implicitly takes advantage of visual tracking and shares the robustness of both deep Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). In our method, a multi-branch model is proposed to suppress noise from background jitter. Specifically, we first extract multi-level deep features from deep CNNs and feed them into a 3D convolutional network. We then feed those feature cubes into our novel joint LSTM module to predict labels and to generate attention regularization. We evaluate our model on two challenging datasets: UCF101 and HMDB51. The results show that our model achieves the state of the art using only convolutional features.
Social Media Image Analysis for Public Health
Several projects have shown the feasibility of using textual social media data to track public health concerns, such as temporal influenza patterns or geographical obesity patterns. In this paper, we look at whether geo-tagged images from Instagram also provide a viable data source. Especially for "lifestyle" diseases, such as obesity, drinking or smoking, images of social gatherings could provide information that is not necessarily shared in, say, tweets. In this study, we explore whether (i) tags provided by the users and (ii) annotations obtained via automatic image tagging are indeed valuable for studying public health. We find that both user-provided and machine-generated tags provide information that can be used to infer a county's health statistics. Whereas for most statistics user-provided tags are better features, for predicting excessive drinking machine-generated tags such as "liquid" and "glass" yield better models. This hints at the potential of using machine-generated tags to study substance abuse.
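The tag-based inference step can be sketched by ranking tags by the strength of their correlation with a county-level health statistic — a toy illustration in which the tag names and statistics are invented:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sdx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sdy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sdx * sdy)

def tag_correlations(county_tags, county_stat):
    """county_tags: {county: {tag: frequency}}; county_stat: {county: value}.
    Returns (tag, r) pairs ranked by |correlation| with the statistic."""
    counties = sorted(county_stat)
    tags = {t for c in counties for t in county_tags[c]}
    scored = {t: pearson([county_tags[c].get(t, 0) for c in counties],
                         [county_stat[c] for c in counties]) for t in tags}
    return sorted(scored.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

The strongest-correlating tags would then serve as candidate features for a predictive model of the statistic.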
Email Spam Filter using Bayesian Neural Networks
Nowadays, e-mail is becoming one of the fastest and most economical forms of communication, but it is prone to misuse. One such misuse is the sending of unsolicited, unwanted e-mails known as spam or junk e-mail. This paper presents and discusses an implementation of a spam filtering system. The idea is to use a neural network trained to recognize different forms of frequently used words in spam mails. The Bayesian ANN is trained with finite sample sizes to approximate the ideal observer. This strategy can provide improved filtering of spam compared with existing static spam filters.
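For illustration, a classical naive Bayes word filter — a much simpler baseline than the Bayesian ANN the paper trains — can be sketched as:

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Toy word-level naive Bayes filter with add-one smoothing.
    Shown only as a classical baseline; the paper's method is a Bayesian ANN."""
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.counts[label].update(text.lower().split())
        self.totals[label] += 1

    def classify(self, text):
        # Vocabulary size for add-one (Laplace) smoothing.
        vocab = len(self.counts["spam"] | self.counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            denom = sum(self.counts[label].values()) + vocab
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.totals[label] / sum(self.totals.values()))
            for w in text.lower().split():
                score += math.log((self.counts[label][w] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)
```

A static filter fixes its word lists in advance; a trained model like this (or the paper's ANN) instead adapts its word statistics to the mail it has seen.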