title | abstract |
---|---|
Driving an autonomous car with eye tracking | This paper describes eyeDriver, a hardware and software setup to drive an autonomous car with eye movement. The movement of the operator’s iris is tracked with an infrared sensitive camera built onto a HED4 interface by SMI. The position of the iris is then propagated by eyeDriver to control the steering wheel of “Spirit of Berlin”, a completely autonomous car developed by the Free University of Berlin which is capable of unmanned driving in urban areas. |
Can the pre-operative Western Ontario and McMaster score predict patient satisfaction following total hip arthroplasty? | In this study we evaluated whether pre-operative Western Ontario and McMaster Universities (WOMAC) osteoarthritis scores can predict satisfaction following total hip arthroplasty (THA). Prospective data for a cohort of patients undergoing THA from two large academic centres were collected, and pre-operative and one-year post-operative WOMAC scores and a 25-point satisfaction questionnaire were obtained for 446 patients. Satisfaction scores were dichotomised into either improvement or deterioration. Scatter plots and Spearman's rank correlation coefficient were used to describe the association between pre-operative WOMAC and one-year post-operative WOMAC scores and patient satisfaction. Satisfaction was compared using receiver operating characteristic (ROC) analysis against pre-operative, post-operative and δ WOMAC scores. We found no relationship between pre-operative WOMAC scores and one-year post-operative WOMAC or satisfaction scores, with Spearman's rank correlation coefficients of 0.16 and -0.05, respectively. The ROC analysis showed areas under the curve (AUC) of 0.54 (pre-operative WOMAC), 0.67 (post-operative WOMAC) and 0.43 (δ WOMAC), respectively, for an improvement in satisfaction. We conclude that the pre-operative WOMAC score does not predict the post-operative WOMAC score or patient satisfaction after THA, and that WOMAC scores can therefore not be used to prioritise patient care. |
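The two statistics this abstract relies on, Spearman's rank correlation and the ROC area under the curve, are easy to reproduce mechanically. The sketch below (Python with scipy and scikit-learn, on synthetic stand-in data, not the study's cohort) only illustrates how such values are computed.

```python
# Illustrative sketch (not the study's data): computing Spearman's rank
# correlation and an ROC AUC of the kind reported in the abstract.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 446  # cohort size reported in the abstract

# Hypothetical scores: pre-operative and one-year post-operative WOMAC.
pre_womac = rng.uniform(20, 80, n)
post_womac = rng.uniform(0, 60, n)     # unrelated to pre_womac by construction
satisfied = rng.integers(0, 2, n)      # dichotomised satisfaction (1 = improved)

rho, p_value = spearmanr(pre_womac, post_womac)
auc = roc_auc_score(satisfied, pre_womac)  # how well pre-op WOMAC separates groups

print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
print(f"AUC (pre-op WOMAC vs. satisfaction) = {auc:.2f}")  # ~0.5 means no discrimination
```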
Quantitative Comparison of Flux-Switching Permanent-Magnet Motors With Interior Permanent Magnet Motor for EV, HEV, and PHEV Applications | Permanent-magnet (PM) motors with both magnets and armature windings on the stator (stator PM motors) have attracted considerable attention due to their simple structure, robust configuration, high power density, easy heat dissipation, and suitability for high-speed operations. However, current PM motors in industrial, residential, and automotive applications are still dominated by interior permanent-magnet motors (IPM) because the claimed advantages of stator PM motors have not been fully investigated and validated. Hence, this paper will perform a comparative study between a stator-PM motor, namely, a flux switching PM motor (FSPM), and an IPM which has been used in the 2004 Prius hybrid electric vehicle (HEV). For a fair comparison, the two motors are designed at the same phase current, current density, and dimensions including the stator outer diameter and stack length. First, the Prius-IPM is investigated by means of the finite-element method (FEM). The FEM results are then verified by experimental results to confirm the validity of the methods used in this study. Second, the FSPM design is optimized and investigated based on the same method used for the Prius-IPM. Third, the electromagnetic performance and the material mass of the two motors are compared. It is concluded that the FSPM has a more sinusoidal back-EMF and is hence more suitable for BLAC control. It also offers the advantages of smaller torque ripple and better mechanical integrity for safer and smoother operations. But the FSPM has disadvantages such as a low magnet utilization ratio and high cost. It may not be able to compete with the IPM in automotive and other applications where cost constraints are tight. |
CRISPR-Cas9 Knockin Mice for Genome Editing and Cancer Modeling | CRISPR-Cas9 is a versatile genome editing technology for studying the functions of genetic elements. To broadly enable the application of Cas9 in vivo, we established a Cre-dependent Cas9 knockin mouse. We demonstrated in vivo as well as ex vivo genome editing using adeno-associated virus (AAV)-, lentivirus-, or particle-mediated delivery of guide RNA in neurons, immune cells, and endothelial cells. Using these mice, we simultaneously modeled the dynamics of KRAS, p53, and LKB1, the top three significantly mutated genes in lung adenocarcinoma. Delivery of a single AAV vector in the lung generated loss-of-function mutations in p53 and Lkb1, as well as homology-directed repair-mediated Kras(G12D) mutations, leading to macroscopic tumors of adenocarcinoma pathology. Together, these results suggest that Cas9 mice empower a wide range of biological and disease modeling applications. |
Higher-order Coreference Resolution with Coarse-to-fine Inference | We introduce a fully differentiable approximation to higher-order inference for coreference resolution. Our approach uses the antecedent distribution from a span-ranking architecture as an attention mechanism to iteratively refine span representations. This enables the model to softly consider multiple hops in the predicted clusters. To alleviate the computational cost of this iterative process, we introduce a coarse-to-fine approach that incorporates a less accurate but more efficient bilinear factor, enabling more aggressive pruning without hurting accuracy. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the English OntoNotes benchmark, while being far more computationally efficient. |
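The refinement step described here can be illustrated with a small numpy sketch. Everything below (dimensions, the pairwise score, the gate parameters) is an invented placeholder rather than the paper's trained model; it only shows the mechanism of attending over antecedents and gating the result back into the span representations.

```python
# Toy sketch of higher-order span refinement via the antecedent distribution.
# Sizes, scores, and the gate are made-up placeholders, not the real model.
import numpy as np

rng = np.random.default_rng(0)
n_spans, dim = 5, 8
g = rng.normal(size=(n_spans, dim))          # initial span representations
W_f = rng.normal(size=(2 * dim, dim))        # gate parameters (would be learned)

def antecedent_scores(g):
    # Stand-in pairwise score; span i may only attend to spans j < i.
    # (Span 0, which has no antecedents, falls back to a uniform distribution here.)
    s = g @ g.T
    mask = np.tril(np.ones((n_spans, n_spans)), k=-1)
    return np.where(mask > 0, s, -1e9)

for _ in range(2):                            # two refinement iterations ("hops")
    s = antecedent_scores(g)
    p = np.exp(s - s.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)         # attention over antecedents
    a = p @ g                                 # expected antecedent representation
    f = 1 / (1 + np.exp(-np.concatenate([g, a], axis=1) @ W_f))  # sigmoid gate
    g = f * g + (1 - f) * a                   # gated update of span representations

print(g.shape)                                # (5, 8): refined span representations
```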
An Overview of Predictors for Intrinsically Disordered Proteins over 2010–2014 | The sequence-structure-function paradigm of proteins has been changed by the occurrence of intrinsically disordered proteins (IDPs). Benefiting from their structural disorder, IDPs are of particular importance in biological processes like regulation and signaling. IDPs are associated with human diseases, including cancer, cardiovascular disease, neurodegenerative diseases, amyloidoses, and several other maladies. IDPs attract a high level of interest, and a substantial effort has been made to develop experimental and computational methods for their study. So far, more than 70 prediction tools have been developed since 1997, of which 17 predictors were created in the last five years. Here, we present an overview of IDP predictors developed during 2010-2014. We analyze the algorithms used for IDP prediction by these tools and also discuss the basic concepts behind the various prediction methods. A comparison of prediction performance among these tools is discussed as well. |
Security Failures in EMV Smart Card Payment Systems | New credit cards containing Europay, MasterCard and Visa (EMV) chips, designed to enhance the security of in-store rather than online purchases, have seen considerable adoption. EMV is supposed to protect payment cards by having the computer chip in the card (referred to as a chip-and-PIN card) generate a unique one-time code each time the card is used. The one-time code is designed so that, even if it is copied or stolen from the merchant system or the payment terminal, it cannot be used to create a counterfeit copy of that card or a counterfeit chip for a transaction. However, in spite of this design, EMV technology is not entirely foolproof. In this paper we discuss the issues, failures and fraudulent cases associated with EMV chip-and-PIN card technology. |
Write Fast, Read in the Past: Causal Consistency for Client-Side Applications | Client-side apps (e.g., mobile or in-browser) need cloud data to be available in a local cache, for both reads and updates. For optimal user experience and developer support, the cache should be consistent and fault-tolerant. In order to scale to large numbers of unreliable and resource-poor clients, and to a large database, the system needs to use resources sparingly. The SwiftCloud distributed object database is the first to provide fast reads and writes via a causally-consistent client-side local cache backed by the cloud. It is thrifty in resources and scales well, thanks to consistent versioning provided by the cloud, using small and bounded metadata. It remains available during faults, switching to a different data centre when the current one is not responsive, while maintaining its consistency guarantees. This paper presents the SwiftCloud algorithms, design, and experimental evaluation. It shows that client-side apps enjoy high performance and availability, under the same guarantees as a remote cloud data store, at a small cost. |
Synchronization control of a class of memristor-based recurrent neural networks | |
Perpendicular magnetic anisotropy in 70 nm CoFe2O4 thin films fabricated on SiO2/Si(1 0 0) by the sol–gel method | Cobalt ferrite CoFe2O4 films were fabricated on SiO2/Si(1 0 0) by the sol–gel method. Films crystallized at/above 600 °C are stoichiometric as expected. With increase of the annealing temperature from 600 °C to 750 °C, the columnar grain size of the CoFe2O4 film increases from 13 nm to 50 nm, resulting in surface roughness increasing from 0.46 nm to 2.55 nm. Magnetic hysteresis loops in both in-plane and out-of-plane directions, at different annealing temperatures, indicate that the films annealed at 750 °C exhibit obvious perpendicular magnetic anisotropy. Simultaneously, with the annealing temperature increasing from 600 °C to 750 °C, the out-of-plane coercivity increases from 1 kOe to 2.4 kOe and the corresponding saturation magnetization increases from 200 emu/cm3 to 283 emu/cm3. In addition, all crystallized films exhibit cluster-like structured magnetic domains. |
Smart City Components Architecture | The research essentially modularizes the structure of utilities and develops a system for following up activities electronically at the city scale. The GIS operational platform will be the base for managing the infrastructure development components, with systems interoperability across the available city infrastructure related systems. The concentration will be on the available utility networks in order to develop comprehensive, common, standardized geospatial data models. The construction operations for the utility networks, such as electricity, water, gas, district cooling, irrigation, sewerage and communication networks, need to be fully monitored on a daily basis in order to utilize the huge resources and manpower involved. These resources are allocated only to convey the operational status to the construction and execution sections that carry out the required maintenance. A system serving decision makers in following up these activities, with proper geographical representation, will definitely reduce the operational cost in the long term. |
Modeling Bitcoin Contracts by Timed Automata | Bitcoin is a peer-to-peer cryptographic currency system. Since its introduction in 2008, Bitcoin has gained noticeable popularity, mostly due to the following properties: (1) the transaction fees are very low, and (2) it is not controlled by any central authority, which in particular means that nobody can “print” money to generate inflation. Moreover, the transaction syntax allows the creation of so-called contracts, in which a number of mutually distrusting parties engage in a protocol to jointly perform some financial task, and the fairness of this process is guaranteed by the properties of Bitcoin. Although Bitcoin contracts have several potential applications in the digital economy, so far they have not been widely used in real life. This is partly due to the fact that they are cumbersome to create and analyze, and hence risky to use. In this paper we propose to remedy this problem by using methods originally developed for the computer-aided analysis of hardware and software systems, in particular those based on timed automata. More concretely, we propose a framework for modeling Bitcoin contracts using timed automata in the UPPAAL model checker. Our method is general and can be used to model several contracts. As a proof of concept, we use this framework to model some of the Bitcoin contracts from our recent previous work. We then automatically verify their security in UPPAAL, finding (and correcting) some subtle errors that were difficult to spot by manual analysis. We hope that our work can draw the attention of researchers working on formal modeling to the problem of Bitcoin contract verification, and spark off more research on this topic. |
Coherent line drawing | This paper presents a non-photorealistic rendering technique that automatically generates a line drawing from a photograph. We aim at extracting a set of coherent, smooth, and stylistic lines that effectively capture and convey important shapes in the image. We first develop a novel method for constructing a smooth direction field that preserves the flow of the salient image features. We then introduce the notion of flow-guided anisotropic filtering for detecting highly coherent lines while suppressing noise. Our method is simple and easy to implement. A variety of experimental results are presented to show the effectiveness of our method in producing self-contained, high-quality line illustrations. |
Stochastic Gradient Boosting | Gradient boosting constructs additive regression models by sequentially fitting a simple parameterized function (base learner) to current "pseudo"-residuals by least squares at each iteration. The pseudo-residuals are the gradient of the loss functional being minimized, with respect to the model values at each training data point, evaluated at the current step. It is shown that both the approximation accuracy and execution speed of gradient boosting can be substantially improved by incorporating randomization into the procedure. Specifically, at each iteration a subsample of the training data is drawn at random (without replacement) from the full training data set. This randomly selected subsample is then used in place of the full sample to fit the base learner and compute the model update for the current iteration. This randomized approach also increases robustness against overcapacity of the base learner. 1 Gradient Boosting. In the function estimation problem one has a system consisting of a random "output" or "response" variable $y$ and a set of random "input" or "explanatory" variables $\mathbf{x} = \{x_1, \ldots, x_n\}$. Given a "training" sample $\{y_i, \mathbf{x}_i\}_1^N$ of known $(y, \mathbf{x})$-values, the goal is to find a function $F^*(\mathbf{x})$ that maps $\mathbf{x}$ to $y$, such that over the joint distribution of all $(y, \mathbf{x})$-values, the expected value of some specified loss function $\Psi(y, F(\mathbf{x}))$ is minimized: $F^*(\mathbf{x}) = \arg\min_{F(\mathbf{x})} E_{y,\mathbf{x}}\, \Psi(y, F(\mathbf{x}))$ (1). Boosting approximates $F^*(\mathbf{x})$ by an "additive" expansion of the form |
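The procedure this abstract describes is straightforward to write down. The following Python sketch (scikit-learn trees, synthetic data, squared-error loss so the pseudo-residuals are simply y - F(x); all of it assumed purely for illustration) shows the key modification: each base learner is fit on a subsample drawn without replacement.

```python
# Sketch of stochastic gradient boosting for squared-error loss, where the
# pseudo-residuals are y - F(x). Assumes scikit-learn; data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 500)

n_iter, learning_rate, subsample = 200, 0.1, 0.5
F = np.full_like(y, y.mean())             # F_0: constant initial model
trees = []

for _ in range(n_iter):
    residuals = y - F                     # pseudo-residuals for squared loss
    # Draw a random subsample WITHOUT replacement (the paper's key modification).
    idx = rng.choice(len(y), size=int(subsample * len(y)), replace=False)
    tree = DecisionTreeRegressor(max_depth=3)
    tree.fit(X[idx], residuals[idx])      # base learner fit on the subsample only
    F += learning_rate * tree.predict(X)  # model update applied to all points
    trees.append(tree)

print("training MSE:", np.mean((y - F) ** 2))
```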
Titian: Data Provenance Support in Spark | Debugging data processing logic in Data-Intensive Scalable Computing (DISC) systems is a difficult and time consuming effort. Today's DISC systems offer very little tooling for debugging programs, and as a result programmers spend countless hours collecting evidence (e.g., from log files) and performing trial and error debugging. To aid this effort, we built Titian, a library that enables data provenance (tracking data through transformations) in Apache Spark. Data scientists using the Titian Spark extension will be able to quickly identify the input data at the root cause of a potential bug or outlier result. Titian is built directly into the Spark platform and offers data provenance support at interactive speeds, orders of magnitude faster than alternative solutions, while minimally impacting Spark job performance; observed overheads for capturing data lineage rarely exceed 30% above the baseline job execution time. |
Music listening as a means of stress reduction in daily life | The relation between music listening and stress is inconsistently reported across studies, with the major part of studies being set in experimental settings. Furthermore, the psychobiological mechanisms for a potential stress-reducing effect remain unclear. We examined the potential stress-reducing effect of music listening in everyday life using both subjective and objective indicators of stress. Fifty-five healthy university students were examined in an ambulatory assessment study, both during a regular term week (five days) and during an examination week (five days). Participants rated their current music-listening behavior and perceived stress levels four times per day, and a sub-sample (n = 25) additionally provided saliva samples for the later analysis of cortisol and alpha-amylase on two consecutive days during both weeks. Results revealed that mere music listening was effective in reducing subjective stress levels (p = 0.010). The most profound effects were found when 'relaxation' was stated as the reason for music listening, with subsequent decreases in subjective stress levels (p ≤ 0.001) and lower cortisol concentrations (p ≤ 0.001). Alpha-amylase varied as a function of the arousal of the selected music, with energizing music increasing and relaxing music decreasing alpha-amylase activity (p = 0.025). These findings suggest that music listening can be considered a means of stress reduction in daily life, especially if it is listened to for the reason of relaxation. Furthermore, these results shed light on the physiological mechanisms underlying the stress-reducing effect of music, with music listening differentially affecting the physiological stress systems. |
Mechanistic basis for the effects of process parameters on quality attributes in high shear wet granulation. | Three model compounds were used to study the effect of process parameters on in-process critical material attributes and a final product critical quality attribute. The effect of four process parameters was evaluated using design of experiment approach. Batches were characterized for particle size distribution, density (porosity), flow, compaction, and dissolution rate. The mechanisms of the effect of process parameters on primary granule properties (size and density) were proposed. Water amount showed significant effect on granule size and density. The effect of impeller speed was dependent on the granule mechanical properties and efficiency of liquid distribution in the granulator. Blend density was found to increase rapidly during wet massing. Liquid addition rate was the least consequential factor and showed minimal impact on granule density and growth. Correlations of primary properties with granulation bulk powder properties (compaction and flow) and tablet dissolution were also identified. The effects of the process parameters on the bulk powder properties and tablet dissolution were consistent with their proposed link to primary granule properties. Understanding the impact of primary granule properties on bulk powder properties and final product critical quality attributes provides the basis for modulating granulation parameters in order to optimize product performance. |
Improving web search ranking by incorporating user behavior information | We show that incorporating user behavior data can significantly improve the ordering of top results in a real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large-scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithm by as much as 31% relative to the original performance. |
Deep learning for undersampled MRI reconstruction | This paper presents a deep learning method for faster magnetic resonance imaging (MRI) by reducing k-space data with sub-Nyquist sampling strategies and provides a rationale for why the proposed approach works well. Uniform subsampling is used in the time-consuming phase-encoding direction to capture high-resolution image information, while permitting the image-folding problem dictated by the Poisson summation formula. To deal with the localization uncertainty due to image folding, a small number of low-frequency k-space data are added. Training the deep learning net involves input and output images that are pairs of the Fourier transforms of the subsampled and fully sampled k-space data. Our experiments show the remarkable performance of the proposed method; only 29% of the k-space data can generate images of high quality as effectively as standard MRI reconstruction with the fully sampled data. |
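A minimal numpy sketch of the sampling scheme the abstract describes, assuming placeholder image sizes and line counts: uniform subsampling of phase-encoding lines plus a few low-frequency lines, with the zero-filled inverse FFT serving as the network input paired with the fully sampled image as the label.

```python
# Sketch of the sampling pattern: uniform subsampling in the phase-encoding
# direction plus extra low-frequency k-space lines, then a zero-filled inverse
# FFT. Assumes numpy only; the image and exact line counts are placeholders.
import numpy as np

image = np.random.rand(256, 256)               # stand-in for a fully sampled image
kspace = np.fft.fftshift(np.fft.fft2(image))   # fully sampled k-space

acceleration, n_low_freq = 4, 16
mask = np.zeros(256, dtype=bool)
mask[::acceleration] = True                     # uniform subsampling of PE lines
center = 256 // 2
mask[center - n_low_freq // 2: center + n_low_freq // 2] = True  # low-freq lines

undersampled = kspace * mask[:, None]           # keep only the sampled PE lines
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

print("sampled fraction:", mask.mean())  # ~0.3 at these placeholder settings
# (zero_filled, image) would form one (input, label) training pair for the net.
```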
60 MHz PMN-PT based 1-3 composite transducer for IVUS imaging | High frequency transducers are desired for usage in catheter-based IVUS (intravascular ultrasound) imaging, which is the cross-sectional visualization of the interior of blood vessels in the human body. High frequency intravascular imaging benefits from the use of high sensitivity and broad bandwidths as they allow for high resolution imaging at increased depth of field. The 40 MHz piezo-composite micromachined ultrasound transducer (PC-MUT) technique developed by the authors in the past years demonstrated that the limitations of traditional dice-and-fill methodology for the fabrication of 1-3 composites could be overcome by the application of deep reactive ion etching (DRIE) of PMN-PT single crystals. In this work, the fabrication and testing of higher frequency transducers and prototype catheters is presented. 1-3 PC-MUT composites were characterized and transducer devices with a target frequency of 60 MHz were prototyped and incorporated into functional catheters for imaging using a porcine animal model. |
Simultaneous self-calibration of a projector and a camera using structured light | We propose a method for geometric calibration of an active vision system, composed of a projector and a camera, using structured light projection. Unlike existing methods of self-calibration for projector-camera systems, our method estimates the intrinsic parameters of both the projector and the camera as well as extrinsic parameters except a global scale without any calibration apparatus such as a checker-pattern board. Our method is based on the decomposition of a radial fundamental matrix into intrinsic and extrinsic parameters. Dense and accurate correspondences are obtained utilizing structured light patterns consisting of Gray code and phase-shifting sinusoidal code. To alleviate the sensitivity issue in estimating and decomposing the radial fundamental matrix, we propose an optimization approach that guarantees the possible solution using a prior for the principal points. We demonstrate the stability of our method using several examples and evaluate the system quantitatively and qualitatively. |
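One concrete ingredient mentioned above, the Gray-code structured-light patterns used to obtain dense correspondences, can be sketched in a few lines. The sizes below are assumptions for illustration, not the authors' setup.

```python
# Sketch of binary Gray-code stripe patterns for dense projector-camera
# correspondences. Assumes numpy; width and bit count are placeholder values.
import numpy as np

width, n_bits = 1024, 10                    # 2**10 columns cover the projector width

columns = np.arange(width)
gray = columns ^ (columns >> 1)             # binary-reflected Gray code per column

# One stripe pattern per bit: a pixel column is bright if that Gray-code bit is 1.
patterns = np.array([(gray >> b) & 1 for b in range(n_bits - 1, -1, -1)],
                    dtype=np.uint8) * 255   # shape (n_bits, width); tile rows to project

def gray_to_binary(g: int) -> int:
    """Decode a captured Gray-code bit pattern back to a projector column index."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

assert gray_to_binary(int(gray[500])) == 500   # round trip: column 500 decodes correctly
```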
Is the Distribution of Stock Returns Predictable? | A large literature has considered predictability of the mean or volatility of stock returns but little is known about whether the distribution of stock returns more generally is predictable. We explore this issue in a quantile regression framework and consider whether a range of economic state variables are helpful in predicting different quantiles of stock returns representing left tails, right tails or shoulders of the return distribution. Many variables are found to have an asymmetric effect on the return distribution, affecting lower, central and upper quantiles very differently. Out-of-sample forecasts suggest that upper quantiles of the return distribution can be predicted by means of economic state variables although the center of the return distribution is more difficult to predict. Economic gains from utilizing information in time-varying quantile forecasts are demonstrated through portfolio selection and option trading experiments. |
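Quantile regression of returns on an economic state variable, the basic tool of this abstract, looks as follows in practice. The sketch assumes statsmodels and synthetic heavy-tailed data; it only demonstrates the interface, not the paper's variables or results.

```python
# Sketch of predicting quantiles of returns from one economic state variable
# using quantile regression (statsmodels). Data are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
state = rng.normal(size=1000)                             # e.g. a dividend-yield stand-in
returns = 0.1 * state + rng.standard_t(df=5, size=1000)   # heavy-tailed returns

X = sm.add_constant(state)
for q in (0.05, 0.5, 0.95):                   # left tail, center, right tail
    res = sm.QuantReg(returns, X).fit(q=q)
    print(f"q={q}: slope on state variable = {res.params[1]:+.3f}")
```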
Dielectric resonators and filters | High-performance DRs and filters are widely used in wireless and satellite communication systems due to their superior characteristics, such as high unloaded Q, excellent temperature stability, and smaller size compared to their air-filled counterparts. Temperature-stable high-Q dielectric materials with a wide range of dielectric constants from 20 to 100 are available. The properties of a DR such as resonant frequency and unloaded Q can be accurately determined by full-wave EM simulations. TE01 single-mode filters offer the advantages of design simplicity and flexibility in layout options over HE11 dual-mode filters, while dual-mode filters have significant advantages in the mass and volume of the products. By mixing the TE01 and HE11 modes, dielectric-loaded resonator filters achieve the advantages of dual-mode HE11 DR filters and the excellent spurious performance of TE01-mode ring resonator filters. |
PRET DRAM controller: Bank privatization for predictability and temporal isolation | Hard real-time embedded systems employ high-capacity memories such as Dynamic RAMs (DRAMs) to cope with increasing data and code sizes of modern designs. However, memory controller design has so far largely focused on improving average-case performance. As a consequence, the latency of memory accesses is unpredictable, which complicates the worst-case execution time analysis necessary for hard real-time embedded systems.
Our work introduces a novel DRAM controller design that is predictable and that significantly reduces worst-case access latencies. Instead of viewing the DRAM device as one resource that can only be shared as a whole, our approach views it as multiple resources that can be shared between one or more clients individually. We partition the physical address space following the internal structure of the DRAM device, i.e., its ranks and banks, and interleave accesses to the blocks of this partition. This eliminates contention for shared resources within the device, making accesses temporally predictable and temporally isolated. This paper describes our DRAM controller design and its integration with a precision-timed (PRET) architecture called PTARM. We present analytical bounds on the latency and throughput of the proposed controller, and confirm these via simulation. |
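The bank-privatization idea can be made concrete with a toy address-mapping sketch. The field widths and the mapping below are illustrative assumptions, not the actual PTARM/PRET controller design.

```python
# Sketch of partitioning the physical address space by bank so that each
# client owns a private bank and never contends with others inside the device.
# Field widths and the mapping itself are illustrative assumptions.
NUM_BANKS = 4          # private resources, one per client
ROW_BITS, COL_BITS = 14, 10

def map_address(addr: int) -> tuple[int, int, int]:
    """Split a physical address into (bank, row, column)."""
    col = addr & ((1 << COL_BITS) - 1)
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    bank = (addr >> (COL_BITS + ROW_BITS)) & (NUM_BANKS - 1)
    return bank, row, col

def private_base(client_id: int) -> int:
    """Base address of the partition owned by one client (one bank each)."""
    return client_id << (COL_BITS + ROW_BITS)

# Client 2's accesses always land in bank 2, so they cannot be delayed by
# row-buffer conflicts caused by other clients.
bank, row, col = map_address(private_base(2) + 0x1234)
print(bank, row, col)   # -> 2 4 564
```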
Asthmon: empowering asthmatic children's self-management with a virtual pet | Asthma is a common chronic childhood disease. Children spend a majority of their time in schools, and barriers to on-site asthma management have been reported. Previous forms of clinical intervention have regarded patients as passive subjects. However, self-management plays a significant role in caring for asthmatics. We consider asthmatic children and their parents, primary caregivers, as active participants in their treatment and care. To achieve this, we created Asthmon, a portable virtual pet that measures the lung capacity, and instructs appropriate actions to take. |
A New Travelling Wave Antenna in Microstrip | The radiation characteristics of the first higher order mode of microstrip lines are investigated. As a result, a simple travelling wave antenna element is described, having a larger bandwidth compared with resonator antennas. A method to excite the first higher order mode is shown. A single antenna element is treated theoretically and experimentally, and an array of four antenna elements is demonstrated. |
Generalized least squares multiple 3D surface matching | A method for the simultaneous co-registration and georeferencing of multiple 3D pointclouds and associated intensity information is proposed. It is a generalization of the 3D surface matching problem. The simultaneous co-registration provides for a strict solution to the problem, as opposed to sequential pairwise registration. The problem is formulated as the Least Squares matching of overlapping 3D surfaces. The parameters of the 3D transformations of multiple surfaces are simultaneously estimated, using the Generalized Gauss-Markoff model, minimizing the sum of squares of the Euclidean distances among the surfaces. An observation equation is written for each surface-to-surface correspondence. Each overlapping surface pair contributes a group of observation equations to the design matrix. The parameters are introduced into the system as stochastic variables, as a second type of (fictitious) observations. This extension allows control of the estimated parameters. Intensity information is introduced into the system in the form of quasi-surfaces as the third type of observations. Reference points, defining an external (object) coordinate system, which are imaged in additional intensity images or can be located in the pointcloud, serve as the fourth type of observations. They transform the whole block of “models” to a unique reference system. Furthermore, the given coordinate values of the control points are treated as observations. This gives the fifth type of observations. The total system is solved by applying the Least Squares technique, provided that sufficiently good initial values for the transformation parameters are given. This method can be applied to data sets generated from aerial as well as terrestrial laser scanning or other pointcloud generating methods. |
Prospective, randomized, single-blind comparison of effects of 6 months' treatment with atorvastatin versus pravastatin on leptin and angiogenic factors in patients with coronary artery disease | Leptin has been reported to exert an atherosclerotic effect by regulating expression of angiogenic factors that have been implicated in the pathogenesis of coronary artery disease (CAD). The purpose of this study was to investigate whether lipid-lowering therapy (LLT) with statins could affect leptin levels and angiogenic factors in patients with CAD. This study included 76 patients with CAD and 15 subjects without CAD (non-CAD). CAD patients were randomized to 6 months of intensive LLT with atorvastatin or moderate LLT with pravastatin. Plasma leptin, angiopoetin-2 (Ang-2), hepatocyte growth factor (HGF) and vascular endothelial growth factor (VEGF) levels were measured prior to statin therapy (baseline) and after 6 months. Baseline levels of leptin, Ang-2, HGF and VEGF were higher in the CAD group than in the non-CAD group (all P < 0.05). Treatment with intensive LLT decreased leptin, Ang-2, HGF and VEGF levels, whereas moderate LLT did not change these levels. This study suggests that LLT with atorvastatin decreases leptin levels and angiogenic factors in patients with CAD, possibly contributing to the beneficial effects of LLT with atorvastatin in CAD. |
To do or to have? That is the question. | Do experiences make people happier than material possessions? In two surveys, respondents from various demographic groups indicated that experiential purchases (those made with the primary intention of acquiring a life experience) made them happier than material purchases. In a follow-up laboratory experiment, participants experienced more positive feelings after pondering an experiential purchase than after pondering a material purchase. In another experiment, participants were more likely to anticipate that experiences would make them happier than material possessions after adopting a temporally distant, versus a temporally proximate, perspective. The discussion focuses on evidence that experiences make people happier because they are more open to positive reinterpretations, are a more meaningful part of one's identity, and contribute more to successful social relationships. |
Good enough practices in scientific computing | Computers are now essential in all branches of science, but most researchers are never taught the equivalent of basic lab skills for research computing. As a result, data can get lost, analyses can take much longer than necessary, and researchers are limited in how effectively they can work with software and data. Computing workflows need to follow the same practices as lab projects and notebooks, with organized data, documented steps, and the project structured for reproducibility, but researchers new to computing often don't know where to start. This paper presents a set of good computing practices that every researcher can adopt, regardless of their current level of computational skill. These practices, which encompass data management, programming, collaborating with colleagues, organizing projects, tracking work, and writing manuscripts, are drawn from a wide variety of published sources from our daily lives and from our work with volunteer organizations that have delivered workshops to over 11,000 people since 2010. |
Compiling Knowledge into Decomposable Negation Normal Form | We propose a method for compiling propositional theories into a new tractable form that we refer to as decomposable negation normal form (DNNF). We show a number of results about our compilation approach. First, we show that every propositional theory can be compiled into DNNF and present an algorithm to this effect. Second, we show that if a clausal form has a bounded treewidth, then its DNNF compilation has a linear size and can be computed in linear time; treewidth is a graph-theoretic parameter which measures the connectivity of the clausal form. Third, we show that once a propositional theory is compiled into DNNF, a number of reasoning tasks, such as satisfiability and forgetting, can be performed in linear time. Finally, we propose two techniques for approximating the DNNF compilation of a theory when the size of such compilation is too large to be practical. One of the techniques generates a sound but incomplete compilation, while the other generates a complete but unsound compilation. Together, these approximations bound the exact compilation from below and above in terms of their ability to answer queries. |
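The linear-time satisfiability claim is easy to see operationally: an or-node is satisfiable if some child is, and a decomposable and-node is satisfiable if every child is, because its children share no variables and can be satisfied independently. The sketch below is a toy illustration of that check, not the paper's algorithm or data structures.

```python
# Linear-time satisfiability test over a DNNF, sketched with a tiny node class.
# Decomposability (children of an AND share no variables) is what makes the
# independent per-child check sound; this example structure is hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                      # "lit", "and", or "or"
    lit: int = 0                   # nonzero for kind == "lit" (negative = negated)
    children: List["Node"] = field(default_factory=list)

def satisfiable(n: Node) -> bool:
    if n.kind == "lit":
        return True                                   # a lone literal is satisfiable
    if n.kind == "or":
        return any(satisfiable(c) for c in n.children)
    if n.kind == "and":                               # children are variable-disjoint
        return all(satisfiable(c) for c in n.children)
    raise ValueError(n.kind)

# (A or not B) and C, written as a decomposable and-node over disjoint variables.
f = Node("and", children=[Node("or", children=[Node("lit", 1), Node("lit", -2)]),
                          Node("lit", 3)])
print(satisfiable(f))   # True
```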
Conventional oral anticoagulation may not replace prior transesophageal echocardiography for the patients with planned catheter ablation for atrial fibrillation | Preablation transesophageal echocardiography (TEE) is dispensable for patients with planned catheter ablation for atrial fibrillation (AF) who have received at least a 3-week oral anticoagulation therapy, according to the recommendations of the Venice Consensus. However, the role of prior TEE and the effect of preablation short-term oral anticoagulation drugs (OACs) under these circumstances are still unclear. A total of 188 patients with planned catheter ablation for AF and without previous long-term oral anticoagulation, whose duration of AF exceeded 48 h, were randomly divided into receiving 3-week OACs (OACs group) before heparin bridging or receiving no prior OACs (N-OACs group). Follow-up was performed until a TEE had been performed on all cases before ablation. The prevalence of atrial thrombi was 6.3% and 11.7%, respectively (P < 0.05), and the prevalence of minor bleeding was 5.3% and 0%, respectively (P < 0.05), in the OACs and N-OACs groups. There were no thrombotic events or major hemorrhages in either group. After a 3-week effective oral anticoagulation, atrial thrombi could be resolved partly but not completely in patients with AF who had not received long-term oral anticoagulation previously. To ensure safety, prior TEE may be necessary for patients with planned catheter ablation for AF. |
Automation Architecture for Single Operator-Multiple UAV Command and Control | In light of the Office of the Secretary of Defense's Roadmap for unmanned aircraft systems (UASs), there is a critical need for research examining human interaction with heterogeneous unmanned vehicles. The OSD Roadmap clearly delineates the need to investigate the “appropriate conditions and requirements under which a single pilot would be allowed to control multiple airborne UA [unmanned aircraft] simultaneously”. Towards this end, in this paper we provide a meta-analysis of research studies across unmanned aerial and ground vehicle domains that investigated single-operator control of multiple vehicles. As a result, a hierarchical control model for single-operator control of multiple unmanned vehicles (UVs) is proposed that demonstrates the requirements that will need to be met for operator cognitive support of multiple-UV control, with an emphasis on the introduction of higher levels of autonomy. The challenge in achieving effective management of multiple-UV systems in the future is not only to determine whether automation can be used to improve human and system performance, but how and to what degree across hierarchical control loops, as well as determining the types of decision support that will be needed by operators given the high-workload environment. We address when and how increasing levels of automation should be incorporated in multiple-UV systems and discuss the impact not only on human performance but, more importantly, on system performance. |
A Novel Continuum Trunk Robot Based on Contractor Muscles | We describe the design, construction, and operation of a novel continuous backbone “continuum” robot. Inspired by biological trunks and tentacles, the robot is actuated by the pneumatic muscles which form its structure. In contrast to previous designs, the actuators are contractor muscles, which decrease in length as pressure is increased. This choice of actuator results in novel and improved performance with respect to previous pneumatically actuated trunk robots, particularly in use as an active hook. We detail the design process, discuss construction issues, and describe the results of initial experiments using the robot. |
A clinical and economic evaluation of Control of Hyperglycaemia in Paediatric intensive care (CHiP): a randomised controlled trial. | BACKGROUND
Early research in adults admitted to intensive care suggested that tight control of blood glucose during acute illness can be associated with reductions in mortality, length of hospital stay and complications such as infection and renal failure. Prior to our study, it was unclear whether or not children could also benefit from tight control of blood glucose during critical illness.
OBJECTIVES
This study aimed to determine if controlling blood glucose using insulin in paediatric intensive care units (PICUs) reduces mortality and morbidity and is cost-effective, whether or not admission follows cardiac surgery.
DESIGN
Randomised open two-arm parallel group superiority design with central randomisation with minimisation. Analysis was on an intention-to-treat basis. Following random allocation, care givers and outcome assessors were no longer blind to allocation.
SETTING
The setting was 13 English PICUs.
PARTICIPANTS
Patients who met the following criteria were eligible for inclusion: ≥ 36 weeks corrected gestational age; ≤ 16 years; in the PICU following injury, following major surgery or with critical illness; anticipated treatment > 12 hours; arterial line; mechanical ventilation; and vasoactive drugs. Exclusion criteria were as follows: diabetes mellitus; inborn error of metabolism; treatment withdrawal considered; in the PICU > 5 consecutive days; and already in CHiP (Control of Hyperglycaemia in Paediatric intensive care).
INTERVENTION
The intervention was tight glycaemic control (TGC): insulin by intravenous infusion titrated to maintain blood glucose between 4.0 and 7.0 mmol/l.
CONVENTIONAL MANAGEMENT (CM)
This consisted of insulin by intravenous infusion only if blood glucose exceeded 12.0 mmol/l on two samples at least 30 minutes apart; insulin was stopped when blood glucose fell below 10.0 mmol/l.
MAIN OUTCOME MEASURES
The primary outcome was the number of days alive and free from mechanical ventilation within 30 days of trial entry (VFD-30). The secondary outcomes comprised clinical and economic outcomes at 30 days and 12 months and lifetime cost-effectiveness, which included costs per quality-adjusted life-year.
RESULTS
CHiP recruited from May 2008 to September 2011. In total, 19,924 children were screened and 1369 eligible patients were randomised (TGC, 694; CM, 675), 60% of whom were in the cardiac surgery stratum. The randomised groups were comparable at trial entry. More children in the TGC than in the CM arm received insulin (66% vs. 16%). The mean VFD-30 was 23 [mean difference 0.36; 95% confidence interval (CI) -0.42 to 1.14]. The effect did not differ among prespecified subgroups. Hypoglycaemia occurred significantly more often in the TGC than in the CM arm (moderate, 12.5% vs. 3.1%; severe, 7.3% vs. 1.5%). Mean 30-day costs were similar between arms, but mean 12-month costs were lower in the TGC than in CM arm (incremental costs -£3620, 95% CI -£7743 to £502). For the non-cardiac surgery stratum, mean costs were lower in the TGC than in the CM arm (incremental cost -£9865, 95% CI -£18,558 to -£1172), but, in the cardiac surgery stratum, the costs were similar between the arms (incremental cost £133, 95% CI -£3568 to £3833). Lifetime incremental net benefits were positive overall (£3346, 95% CI -£11,203 to £17,894), but close to zero for the cardiac surgery stratum (-£919, 95% CI -£16,661 to £14,823). For the non-cardiac surgery stratum, the incremental net benefits were high (£11,322, 95% CI -£15,791 to £38,615). The probability that TGC is cost-effective is relatively high for the non-cardiac surgery stratum, but, for the cardiac surgery subgroup, the probability that TGC is cost-effective is around 0.5. Sensitivity analyses showed that the results were robust to a range of alternative assumptions.
CONCLUSIONS
CHiP found no differences in the clinical or cost-effectiveness of TGC compared with CM overall, or for prespecified subgroups. A higher proportion of the TGC arm had hypoglycaemia. This study did not provide any evidence to suggest that PICUs should stop providing CM for children admitted to PICUs following cardiac surgery. For the subgroup not admitted for cardiac surgery, TGC reduced average costs at 12 months and is likely to be cost-effective. Further research is required to refine the TGC protocol to minimise the risk of hypoglycaemic episodes and assess the long-term health benefits of TGC.
TRIAL REGISTRATION
Current Controlled Trials ISRCTN61735247.
FUNDING
This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 18, No. 26. See the NIHR Journals Library website for further project information. |
Inter-Rater Reliability of Cognitive-Behavioral Case Formulations of Depression: A Replication | We developed a model of cognitive-behavioral case formulation and tested several hypotheses about therapists' ability to use it to obtain cognitive-behavioral formulations of cases of depressed patients. We tested whether clinicians, using measures we developed, could correctly identify patients' overt problems and agree on assessments of patients' underlying schemas. Clinicians offered cognitive-behavioral formulations for three cases after listening to audiotapes of initial interviews with depressed women conducted by the first author in her private practice. Therapists identified 67% of patients' overt problems. When schema ratings were averaged over five judges, inter-rater reliability was good (inter-rater reliability coefficients averaged 0.72); single judges showed poor inter-rater agreement on schema ratings (inter-rater reliability coefficients averaged 0.37). Providing therapists with a specific context in which to make ratings did not improve schema agreement. Ph.D.-trained therapists were more accurate than non-Ph.D.-trained therapists in identifying patients' problems. Most findings replicated those obtained in an earlier study. |
SPECT brain imaging in epilepsy: a meta-analysis. | UNLABELLED
A meta-analysis of SPECT brain imaging in epilepsy was performed to derive the sensitivity and specificity of interictal, postictal or ictal rCBF patterns to identify a seizure focus in medically refractory patients.
METHODS
Papers were obtained by pooling all published articles identified by two independent literature searches: (a) Dialnet (EMBASE) or Radline by CD-ROM and (b) Current Contents searched manually. Literature inclusion criteria were: (a) patients had a localization-related epileptic syndrome; (b) more than six patients were reported; and (c) patients had at least an interictal EEG-documented epileptiform abnormality. Of 46 papers meeting these criteria, 30 contained extractable data. SPECT results were compared to localization by standard diagnostic evaluation and surgical outcome. Meta-analytic sensitivities for SPECT localization in patients with temporal lobe seizures relative to diagnostic evaluation were 0.44 (interictal), 0.75 (postictal) and 0.97 (ictal). Similar results were obtained relative to surgical outcome. False-positive rates were low relative to diagnostic evaluation (7.4% for interictal and 1.5% for postictal studies) and surgical outcome (4.4% for interictal and 0.0% for postictal studies).
RESULTS
The results were not dependent on tracer used (or dose), the presence of CT-identified structural abnormalities, blinding of image interpretation or camera quality (although data were more variable with low-resolution cameras). There were insufficient data for conclusions regarding extratemporal-seizure or pediatric epilepsy populations.
CONCLUSION
Insights gained from reviewing this literature yielded recommendations for minimal information that should be provided in future reports. Additional recommendations regarding the nature and focus of future studies also are provided. The most important of these is that institutions using SPECT imaging in epilepsy should perform ictal, preferably, or postictal scanning in combination with interictal scanning. |
Combined intravenous and intra-arterial r-TPA versus intra-arterial therapy of acute ischemic stroke: Emergency Management of Stroke (EMS) Bridging Trial. | BACKGROUND AND PURPOSE
The purpose of this study was to test the feasibility, efficacy, and safety of combined intravenous (IV) and local intra-arterial (IA) recombinant tissue plasminogen activator (r-TPA) therapy for stroke within 3 hours of onset of symptoms.
METHODS
This was a double-blind, randomized, placebo-controlled multi-center Phase I study of IV r-TPA or IV placebo followed by immediate cerebral arteriography and local IA administration of r-TPA by means of a microcatheter. Treatment activity was assessed by improvement on the National Institutes of Health Stroke Scale Score (NIHSSS) at 7 to 10 days. The Barthel Index, modified Rankin Scale, and the Glasgow Outcome Scale measured 3-month functional outcome. Arterial recanalization rates and their relation to total r-TPA dose and time to lysis were measured. Rates of life-threatening bleeding, intracerebral hemorrhage (ICH), or other bleeding complications assessed safety.
RESULTS
Thirty-five patients were randomly assigned, 17 into the IV/IA group and 18 into the placebo/IA group. There was no difference in the 7- to 10-day or the 3-month outcomes, although there were more deaths in the IV/IA group. Clot was found in 22 of 34 patients. Recanalization was better (P=0.03) in the IV/IA group with TIMI 3 flow in 6 of 11 IV/IA patients versus 1 of 10 placebo/IA patients and correlated to the total dose of r-TPA (P=0.05). There was no difference in the median treatment intervals from time of onset to IV treatment (2.6 vs 2.7 hours), arteriography (3.3 vs 3.0 hours), or clot lysis (6.3 vs 5.7 hours) between the IV/IA and placebo/IA groups, respectively. A direct relation between NIHSSS and the likelihood of the presence of a clot was identified. Eight ICHs occurred; all were hemorrhagic infarctions. There were no parenchymal hematomas. Symptomatic ICH within 24 hours occurred in 1 placebo/IA patient only. Beyond 24 hours, symptomatic ICH occurred in 2 IV/IA patients only. Life-threatening bleeding complications occurred in 2 patients, both in the IV/IA group. Moderate to severe bleeding complications occurred in 2 IV/IA patients and 1 placebo/IA patient.
CONCLUSIONS
This pilot study demonstrates combined IV/IA treatment is feasible and provides better recanalization, although it was not associated with improved clinical outcomes. The presence of thrombus on initial arteriography was directly related to the baseline NIHSSS. This approach is technically feasible. The numbers of symptomatic ICH were similar between the 2 groups, which suggests that this approach may be safe. Further study is needed to determine the safety and effectiveness of this new method of treatment. Such studies should address not only efficacy and safety but also the cost-benefit ratio and quality of life, given the major investment in time, personnel, and equipment required by combined IV and IA techniques. |
MARNA: multiple alignment and consensus structure prediction of RNAs based on sequence structure comparisons | MOTIVATION
Due to the importance of considering secondary structures in aligning functional RNAs, several pairwise sequence-structure alignment methods have been developed. They use extended alignment scores that evaluate secondary structure information in addition to sequence information. However, two problems for the multiple alignment step remain. First, how to combine pairwise sequence-structure alignments into a multiple alignment and second, how to generate secondary structure information for sequences whose explicit structural information is missing.
RESULTS
We describe a novel approach for multiple alignment of RNAs (MARNA) taking into consideration both the primary and the secondary structures. It is based on pairwise sequence-structure comparisons of RNAs. From these sequence-structure alignments, libraries of weighted alignment edges are generated. The weights reflect the sequential and structural conservation. For sequences whose secondary structures are missing, the libraries are generated by sampling low energy conformations. The libraries are then processed by the T-Coffee system, which is a consistency based multiple alignment method. Furthermore, we are able to extract a consensus-sequence and -structure from a multiple alignment. We have successfully tested MARNA on several datasets taken from the Rfam database. |
Biological feature validation of estimated gene interaction networks from microarray data: a case study on MYC in lymphomas | Gene expression is a dynamic process where thousands of components interact dynamically in a complex way. A major goal in systems biology/medicine is to reconstruct the network of components from microarray data. Here, we address two key aspects of network reconstruction: (i) ergodicity supports the interpretation of the measured data as time averages and (ii) confounding is an important aspect of network reconstruction. To elucidate these aspects, we explore a data set of 214 lymphoma patients with translocated or normal MYC gene. MYC (c-Myc) translocations to immunoglobulin heavy-chain (IGH@) or light-chain (IGK@, IGL@) loci lead to c-Myc overexpression and are widely believed to be the crucial initiating oncogenic events. There is a rich body of knowledge on the biological implications of the different translocations. In the context of these data, the article reflects the relationship between the biological knowledge and the results of formal statistical estimates of gene interaction networks. The article identifies key steps to provide a trustworthy biological feature validation: (i) analysing a medium-sized network as a subnet of a more extensive environment to avoid bias by confounding, (ii) the use of external data to demonstrate the stability and reproducibility of the derived structures, (iii) a systematic literature review on the relevant issue, (iv) use of structured knowledge from databases to support the derived findings and (v) a strategy for biological experiments derived from the findings in steps (i-iv). |
Unemployment Insurance and Entrepreneurship | Based on administrative registers from Norway, we examine how unemployment insurance (UI) and active labor market programs (ALMP) affect the transition rates from unemployment to regular employment and entrepreneurship. We find that the entrepreneurship hazard is highly responsive with respect to UI incentives, and that the probability of starting up a new business increases sharply around the time of UI exhaustion. We also find that while participation in ALMP has a positive impact on the employment hazard, it has no effect on entrepreneurship. We speculate that this reflects the programs' one-sided focus on job search rather than job creation. |
Indexing Query Graphs to Speedup Graph Query Processing | Subgraph/supergraph queries although central to graph analytics, are costly as they entail the NP-Complete problem of subgraph isomorphism. We present a fresh solution, the novel principle of which is to acquire and utilize knowledge from the results of previously executed queries. Our approach, iGQ, encompasses two component subindexes to identify if a new query is a subgraph/supergraph of previously executed queries and stores related key information. iGQ comes with novel query processing and index space management algorithms, including graph replacement policies. The end result is a system that leads to significant reduction in the number of required subgraph isomorphism tests and speedups in query processing time. iGQ can be incorporated into any sub/supergraph query processing method and help improve performance. In fact, it is the only contribution that can speedup significantly both subgraph and supergraph query processing. We establish the principles of iGQ and formally prove its correctness. We have implemented iGQ and have incorporated it within three popular recent state of the art index-based graph query processing solutions. We evaluated its performance using real-world and synthetic graph datasets with different characteristics, and a number of query workloads, showcasing its benefits. |
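The core reuse idea can be sketched with networkx. The cache layout and pruning below follow the general principle stated in the abstract (previously executed queries contained in the new query shrink the candidate set, and previously executed queries that contain the new query contribute their answers directly) rather than iGQ's actual index structures; the subgraph test uses networkx's node-induced VF2 semantics.

```python
# Simplified sketch (assuming networkx) of reusing previous query results to
# cut down the number of subgraph-isomorphism tests; not the real iGQ indexes.
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def contains(big: nx.Graph, small: nx.Graph) -> bool:
    """True if `small` is (node-induced) subgraph-isomorphic to `big`."""
    return GraphMatcher(big, small).subgraph_is_isomorphic()

def query(db: list[nx.Graph], q: nx.Graph, cache: list[tuple[nx.Graph, set[int]]]):
    candidates = set(range(len(db)))
    definite = set()
    for old_q, old_answer in cache:
        if contains(q, old_q):            # old query is a subgraph of q:
            candidates &= old_answer      # only graphs containing old_q can contain q
        if contains(old_q, q):            # q is a subgraph of an old query:
            definite |= old_answer        # those graphs contain old_q, hence contain q
    answer = definite | {i for i in candidates - definite if contains(db[i], q)}
    cache.append((q, answer))
    return answer

db = [nx.path_graph(4), nx.cycle_graph(5), nx.complete_graph(4)]
cache: list[tuple[nx.Graph, set[int]]] = []
print(query(db, nx.path_graph(3), cache))  # {0, 1}: path and cycle have an induced 3-path
print(query(db, nx.path_graph(4), cache))  # {0, 1}: reuses the 3-path result, K4 never tested
```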
THE EVOLUTION OF ACCOUNTING INFORMATION SYSTEMS | Technological evolution becomes more and more a daily reality for businesses and individuals who use information systems for supporting their operational activities. This article focuses on the way technological evolution changes accounting practices, starting from the analysis of the traditional model and trying to determine future trends and the challenges that arise. From data input to consolidation and reporting, accountants' functions and operations are dissected in order to identify to what extent the development of new concepts, such as cloud computing, cloud accounting, real-time accounting or mobile accounting, may affect the financial-accounting process, as well as the challenges that arise from the changing environment. Introduction: Technology evolves rapidly in response to customer demand (Weiß and Leimeister, 2012). From a business perspective, more and more companies acknowledge the fact that technology may support process optimisation in terms of costs, lead time and involved resources. The actual market context is driving companies to continuously search for new ways to optimise their processes and increase their financial indicators (Christauskas and Miseviciene, 2012). The company's efficiency is in direct relation with the objective and timely financial information provided by the accounting system, the main purpose of which is to collect and record information regarding transactions or events with economic impact upon the company, as well as to process and provide relevant, fair information to stakeholders (both internal and external) (Girdzijauskas et al., 2008; Kundeliene, 2011). The accounting system is thus essential in providing reliable, relevant, significant and useful information for the users of financial data (Kalcinskaite, 2009). From the deployment of basic accounting programs or the use of large ERP systems (Enterprise Resource Planning), to the creation of the World-Wide Web and the development of web-based communication, technology increased its development in speed and implications. Today, concepts such as cloud computing, cloud accounting or real-time reporting are more and more a daily reality among companies. But as technology evolves at an incredible speed, it is necessary for companies to adapt their processes and practices to the new trends. However, that could not be possible without the decision factors completely acknowledging the implications of the new technology, how it can be used to better manage their businesses and what challenges it also brings along. The present study is based on theoretical and empirical analysis of the accounting process, from the appearance of the first accounting information systems up to nowadays' services and techniques available for supporting the accounting function of a company. The research comprised analysing accounting operations, activities and practices as they followed the technological evolution for more than 20 years, focusing on a general level, applicable to companies of all sizes and business sectors. Although the geographic extent was limited to the European Union, the study may as well be considered globally applicable, considering the internationality of today's technology (e.g. cloud computing services may be used by a company in Japan while the server is physically located in the USA).
Traditional practices and future trends Accounting systems may be seen as aiming to support businesses in collecting, understanding and analysing financial data (Chytilova et al., 2011). The evolution of accounting software generations may be split, according to Phillips (2012), into three major categories: ▪ 90's era – marked by the appearance of the first accounting software programs under what is known as 'the Windows age'; applications were solid, but only supported basic accounting operations. ▪ 00's era – the 'integration' and 'SaaS' concepts emerged, bringing along more developed systems that allowed more complex accounting operations and data processing, as well as concurrent access to files and programs. ▪ 2010 – ongoing – the 'mobile' accounting era, marked by real-time accounting, financial dashboards and other mobile applications supporting financial processing and reporting. The same author outlines the evolution of communication: if the traditional accounting model was based on e-mail or FTP file transfer, the technological evolution now allows sharing and concurrent access to data through virtual platforms provided by cloud computing technology. Based on the types of accounting services available on the market, three major categories may be defined: ▪ On-premises accounting: a dedicated accounting software program is purchased by the company and installed using its own infrastructure. Investment in software and hardware equipment is required for such programs. ▪ Hosted solutions: logical access is performed remotely through the company's installed programs; however, the data centre is physically located in a different place, managed by a dedicated third party. Infrastructure costs are reduced for the company, as hardware is administered and maintained by the service provider. ▪ Cloud computing: the service can prove even more cost efficient for companies, as the data is managed through virtual platforms and administered by a dedicated third party, allowing multi-tenancy of services in order to split fixed infrastructure costs between companies. Traditional accounting practices used to focus on bookkeeping and financial reporting, having as a final purpose the preparation and presentation of financial statements. The activities were driven by the need of financial information users (both internal and external) to gain a 'fair view' of the company. The objective was often reached through the use of small, atomic systems meant to support the reporting; nevertheless, collection of documents, processing of data and operations, posting of journal entries, as well as consolidation and final reporting, were mostly manual, and manual controls (reconciliations, validations, etc.) were performed as systems did not communicate through automated interfaces. In the early 1920s, the first outsourcing agreement was signed by British Petroleum with Accenture. Ever since, accounting has been changing its meaning within companies, turning from a bookkeeping function into a management strategy and decision-making support function. The technological evolution gave birth in the late 1980s to ERP (Enterprise Resource Planning) systems, used to incorporate and connect various organisational functions (accounting, asset management, operations, procurement, human resources, etc.) (Ziemba and Oblak, 2013).
Ustasüleyman and Percin (2010) define ERP systems as 'software packages enabling the integration of business processes throughout an organisation', while Salmeron and Lopez (2010) see the ERP system as 'a single software system allowing complete integration of information flow from all functional areas in companies by means of a single database, and accessible through a unified interface and communication channel'. ERP systems became common among large companies, which managed to reduce process lead times and the resources involved through the automation of data transfer between ERP modules, of processes within the ERP system, and of validation and reconciliation controls. From an accounting perspective, the deployment of ERP systems represented a major change, offering support in bookkeeping (the operations performed through different modules would generate automated journal entry postings), in the processing and transfer of data (through automated interfaces between the ERP modules), as well as in consolidation and reporting. This progress took accountants one step away from traditional bookkeeping practices and one step closer to today's role in management strategy and decision support. Further on, as a consequence of the financial-economic crisis that started in 2008, the role of accountants within the company changed drastically from bookkeeping and reporting to playing an active and essential role in the management strategy and decision-making process. Thus, it fell to technology to take over the traditional tasks and operations. More and more companies implemented automated controls for processing data, posting journal entries, consolidation and final reporting in the financial statements and other internal management reports. Soon after automating the accounting operations, technology evolved into also automating document collection and management, through the development of concepts such as e-invoicing, e-archiving, e-payments, etc. (Burinskiene & Burinskas, 2010). Technology proved once again responsive to the market's demand, and accounting software easily customisable for each client's particularities regarding activity profile, accounting practices and chart of accounts was built to support the automation of the accounting process. With the automation of the process, the implementation of certain controls was also required to ensure the correctness and completeness of the reported information. Technology also took into account the need for robust, transparent processes, and it was only a matter of time until cloud computing, cloud accounting and real-time reporting concepts became a daily reality among companies of all sizes, activity sectors or regions. Access to financial information, previously physically limited to the company's premises (where the network and infrastructure would be located), was greatly improved through cloud computing, to the point where an internet connection was the only requirement for gaining access to the financial programs and data. Cloud computing start
Energy Management of Fuel Cell/Battery/Supercapacitor Hybrid Power Sources Using Model Predictive Control | Well known as an efficient and eco-friendly power source, the fuel cell unfortunately offers slow dynamics. When used as the primary energy source in a vehicle, a fuel cell cannot respond to abrupt load variations. Supplementing the system with a battery and/or supercapacitor provides a solution to this shortcoming. In addition, current regulation is vital for lengthening the lifespan of the energy storage system. This can be accomplished by keeping the fuel cell's and battery's current slopes within reference values, as well as by maintaining a stable dc output voltage. For that purpose, a feedback control system for regulating the hybrid of fuel cell, batteries and supercapacitor was constructed in this study. The output voltage of the studied hybrid power source (HPS) was regulated by three dc-dc converters, comprising two bidirectional converters and one boost converter. The current/voltage output of the fuel cell was regulated by the boost converter, whereas the bidirectional converters regulated the battery and the supercapacitor. The reference current for each converter was produced using Model Predictive Control (MPC) and subsequently tracked using hysteresis control. These functions were implemented on a dSPACE DS1104 controller board. Experiments were then conducted on a test bench made up of a 6 V, 4.5 Ah battery, a 7.5 V, 120 F supercapacitor and a 50 W, 10 A fuel cell. Results show that constructing a control system to restrict the fuel cell's and battery's current slopes and to maintain the dc bus voltage at the reference values using MPC is feasible and effective.
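A minimal sketch of the receding-horizon current-allocation idea is given below. It assumes a purely algebraic dc-bus current balance, illustrative limits, slope bounds and weights, and uses cvxpy to solve the horizon problem; none of these values or modelling choices come from the paper's dSPACE test bench, and the hysteresis tracking loop is not shown.

```python
# Minimal receding-horizon sketch of splitting a load current between a fuel cell,
# a battery and a supercapacitor while limiting the fuel cell's and battery's
# current slopes. All limits, weights and the load profile are assumptions
# for illustration only, not values from the paper's test bench.
import numpy as np
import cvxpy as cp

H = 10                     # prediction horizon (steps)
load = np.concatenate([np.full(5, 4.0), np.full(5, 9.0)])   # step change in load (A)

i_fc = cp.Variable(H)      # fuel cell current
i_bat = cp.Variable(H)     # battery current
i_sc = cp.Variable(H)      # supercapacitor current (absorbs fast transients)

constraints = [
    i_fc + i_bat + i_sc == load,       # dc-bus current balance
    i_fc >= 0, i_fc <= 10,             # fuel cell limits
    cp.abs(i_bat) <= 5,                # battery limits
    cp.abs(cp.diff(i_fc)) <= 0.2,      # fuel cell slope limit (A per step)
    cp.abs(cp.diff(i_bat)) <= 0.5,     # battery slope limit (A per step)
]

# Penalise supercapacitor usage (it should only cover transients) and current slopes.
objective = cp.Minimize(cp.sum_squares(i_sc)
                        + 10 * cp.sum_squares(cp.diff(i_fc))
                        + 2 * cp.sum_squares(cp.diff(i_bat)))

cp.Problem(objective, constraints).solve()
print("fuel cell:", np.round(i_fc.value, 2))
print("battery:  ", np.round(i_bat.value, 2))
print("supercap: ", np.round(i_sc.value, 2))
```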
The effect of sound-based intervention on children with sensory processing disorders and visual-motor delays. | This study investigated the effects of a sensory diet and therapeutic listening intervention program, directed by an occupational therapist and implemented by parents, on children with sensory processing disorders (SPD) and visual-motor delays. A convenience sample of 10 participants, ages 5 to 11 years, with SPD and visual-motor delays was used. Each participant completed a 4-week sensory diet program, followed by an 8-week therapeutic listening and sensory diet program. The Sensory Profile was completed by the participants' parents before and after both study phases. The Draw-A-Person test, Developmental Test of Visual Motor Integration (VMI), and Evaluation Tool of Children's Handwriting (ETCH) were administered before and after each phase. Over 12 weeks, the participants exhibited significant improvement on the Sensory Profile, increasing a mean of 71 points. Parents reported improvements in their children's behaviors related to sensory processing. Scores on the VMI visual and ETCH legibility scales also improved more during the therapeutic listening phase. Therapeutic listening combined with a sensory diet appears effective in improving behaviors related to sensory processing in children with SPD and visual-motor impairments.
Structured light in scattering media | Virtually all structured light methods assume that the scene and the sources are immersed in pure air and that light is neither scattered nor absorbed. Recently, however, structured lighting has found growing application in underwater and aerial imaging, where scattering effects cannot be ignored. In this paper, we present a comprehensive analysis of two representative methods - light stripe range scanning and photometric stereo - in the presence of scattering. For both methods, we derive physical models for the appearances of a surface immersed in a scattering medium. Based on these models, we present results on (a) the condition for object detectability in light striping and (b) the number of sources required for photometric stereo. In both cases, we demonstrate that while traditional methods fail when scattering is significant, our methods accurately recover the scene (depths, normals, albedos) as well as the properties of the medium. These results are in turn used to restore the appearances of scenes as if they were captured in clear air. Although we have focused on light striping and photometric stereo, our approach can also be extended to other methods such as grid coding, gated and active polarization imaging. |
Low Pressure Storage of Natural Gas for Vehicular Applications | Natural gas is an attractive fuel for vehicles because it is a relatively clean-burning fuel compared with gasoline. Moreover, methane can be stored in the physically adsorbed state [at a pressure of 3.5 MPa (500 psi)] at energy densities comparable to methane compressed at 24.8 MPa (3600 psi). Here we report the development of natural gas storage monoliths [1]. The monolith manufacture and activation methods are reported along with pore structure characterization data. The storage capacities of these monoliths are measured gravimetrically at a pressure of 3.5 MPa (500 psi) and ambient temperature, and storage capacities of >150 V/V have been demonstrated and are reported. |
Is the ‘Darwin-Marx correspondence’ authentic? | Summary For many years there has been a good deal of scholarly and ideological writing on the correspondence which is said to have taken place between Karl Marx and Charles Darwin. The two presumed letters from Charles Darwin to Karl Marx have been published several times, and their significance appraised. In this article their authenticity as letters to Marx is discussed and questioned, and the possibility that Edward Aveling is the addressee of at least one of them is argued. |
Predict Anchor Links across Social Networks via an Embedding Approach | Predicting anchor links across social networks has important implications for an array of applications, including cross-network information diffusion and cross-domain recommendation. One challenging problem is whether, and to what extent, the anchor link prediction problem can be addressed when only structural information about the networks is available. Most existing methods, unsupervised or supervised, directly work on the networks themselves rather than on their intrinsic structural regularities, and thus their effectiveness is sensitive to the high dimension and sparsity of networks. To offer a robust method, we propose a novel supervised model, called PALE, which employs network embedding with awareness of observed anchor links as supervised information to capture the major and specific structural regularities and further learns a stable cross-network mapping for predicting anchor links. Through extensive experiments on two realistic datasets, we demonstrate that PALE significantly outperforms the state-of-the-art methods.
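The embed-then-map idea can be sketched as follows. SVD-based node embeddings, an MLP mapping function and nearest-neighbour matching are stand-ins chosen here for brevity; they are not PALE's exact embedding or mapping models, and the toy networks are synthetic.

```python
# Simplified sketch of anchor link prediction via embedding: embed each network,
# learn a mapping from source to target embeddings on the observed anchor links,
# then predict new anchors by nearest neighbour in the target space.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import NearestNeighbors


def embed(adjacency, dim=16):
    """Low-dimensional node embeddings from the adjacency matrix."""
    return TruncatedSVD(n_components=dim, random_state=0).fit_transform(adjacency)


def predict_anchors(adj_src, adj_tgt, observed_anchors, dim=16):
    """observed_anchors: list of (source_node, target_node) index pairs."""
    emb_src, emb_tgt = embed(adj_src, dim), embed(adj_tgt, dim)
    src_idx, tgt_idx = zip(*observed_anchors)
    mapper = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    mapper.fit(emb_src[list(src_idx)], emb_tgt[list(tgt_idx)])
    # Map every source node into the target space and match it to its nearest target node.
    mapped = mapper.predict(emb_src)
    nn = NearestNeighbors(n_neighbors=1).fit(emb_tgt)
    _, matches = nn.kneighbors(mapped)
    return matches.ravel()           # predicted target node for each source node


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adj = (rng.random((100, 100)) < 0.05).astype(float)
    adj = np.maximum(adj, adj.T)                 # undirected source network
    perm = rng.permutation(100)                  # hidden ground-truth alignment
    adj_tgt = adj[np.ix_(perm, perm)]            # target network = relabelled copy
    observed = [(s, int(np.where(perm == s)[0][0])) for s in range(30)]
    preds = predict_anchors(adj, adj_tgt, observed)
    print("predicted target ids for first 5 source nodes:", preds[:5])
```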
Semantic Segmentation from Limited Training Data | We present our approach for robotic perception in cluttered scenes that led to winning the recent Amazon Robotics Challenge (ARC) 2017. Besides small objects with shiny and transparent surfaces, the biggest challenge of the 2017 competition was the introduction of unseen categories. In contrast to traditional approaches, which require large collections of annotated data and many hours of training, the task here was to obtain a robust perception pipeline with only a few minutes of data acquisition and training time. To that end, we present two strategies that we explored. One is a deep metric learning approach that works in three separate steps: semantic-agnostic boundary detection, patch classification and pixel-wise voting. The other is a fully-supervised semantic segmentation approach with efficient dataset collection. We conduct an extensive analysis of the two methods on our ARC 2017 dataset. Interestingly, only a few examples of each class are sufficient to fine-tune even very deep convolutional neural networks for this specific task.
Graph Capsule Convolutional Neural Networks | Graph Convolutional Neural Networks (GCNNs) are among the most exciting recent advancements in deep learning, and their applications are quickly spreading across domains including bioinformatics, chemoinformatics, social networks, natural language processing and computer vision. In this paper, we expose and tackle some of the basic weaknesses of a GCNN model with the capsule idea presented in (Hinton et al., 2011) and propose our Graph Capsule Network (GCAPS-CNN) model. In addition, we design our GCAPS-CNN model especially to solve the graph classification problem, which current GCNN models find challenging. Through extensive experiments, we show that our proposed Graph Capsule Network can significantly outperform both existing state-of-the-art deep learning methods and graph kernels on graph classification benchmark datasets.
A Review of Models and Frameworks for Designing Mobile Learning Experiences and Environments. | Mobile learning has become increasingly popular in the past decade due to the unprecedented technological affordances achieved through the advancement of mobile computing, which makes ubiquitous and situated learning possible. At the same time, there have been research and implementation projects whose efforts centered on developing mobile learning experiences for various learners’ profiles, accompanied by the development of models and frameworks for designing mobile learning experiences. This paper focuses on categorizing and synthesizing models and frameworks targeted specifically on mobile learning. A total of 17 papers were reviewed, and the models or frameworks were divided into five categories and discussed: 1) pedagogies and learning environment design; 2) platform/system design; 3) technology acceptance; 4) evaluation; and 5) psychological construct. This paper provides a review and synthesis of the models/frameworks. The categorization and analysis can also help inform evaluation, design, and development of curriculum and environments for meaningful mobile learning experiences for learners of various demographics. |
Light-Field Surface Color Segmentation with an Application to Intrinsic Decomposition | To enable light fields of large environments to be captured, they would have to be sparse, i.e. with a relatively large distance between views. Such sparseness, however, causes subsequent processing to be much more difficult than would be the case with dense light fields. This includes segmentation. In this paper, we address the problem of meaningful segmentation of a sparse planar light field, leading to segments that are coherent between views. In addition, uniquely our method does not make the assumption that all surfaces in the environment are perfect Lambertian reflectors, which further broadens its applicability. Our fully automatic segmentation pipeline leverages scene structure, and does not require the user to navigate through the views to fix inconsistencies. The key idea is to combine coarse estimations given by an over-segmentation of the scene into super-rays, with detailed ray-based processing. We show the merit of our algorithm by means of a novel way to perform intrinsic light field decomposition, outperforming state-of-the-art methods. |
Intriguing Properties of Adversarial Examples | It is becoming increasingly clear that many machine learning classifiers are vulnerable to adversarial examples. In attempting to explain the origin of adversarial examples, previous studies have typically focused on the fact that neural networks operate on high dimensional data, they overfit, or they are too linear. Here we argue that the origin of adversarial examples is primarily due to an inherent uncertainty that neural networks have about their predictions. We show that the functional form of this uncertainty is independent of architecture, dataset, and training protocol; and depends only on the statistics of the logit differences of the network, which do not change significantly during training. This leads to adversarial error having a universal scaling, as a power-law, with respect to the size of the adversarial perturbation. We show that this universality holds for a broad range of datasets (MNIST, CIFAR10, ImageNet, and random data), models (including state-of-the-art deep networks, linear models, adversarially trained networks, and networks trained on randomly shuffled labels), and attacks (FGSM, step l.l., PGD). Motivated by these results, we study the effects of reducing prediction entropy on adversarial robustness. Finally, we study the effect of network architectures on adversarial sensitivity. To do this, we use neural architecture search with reinforcement learning to find adversarially robust architectures on CIFAR10. Our resulting architecture is more robust to white and black box attacks compared to previous attempts.
Endogenous intrahepatic IFNs and association with IFN-free HCV treatment outcome. | BACKGROUND. Hepatitis C virus (HCV) infects approximately 170 million people worldwide and may lead to cirrhosis and hepatocellular carcinoma in chronically infected individuals. Treatment is rapidly evolving from IFN-α-based therapies to IFN-α-free regimens that consist of directly acting antiviral agents (DAAs), which demonstrate improved efficacy and tolerability in clinical trials. Virologic relapse after DAA therapy is a common cause of treatment failure; however, it is not clear why relapse occurs or whether certain individuals are more prone to recurrent viremia. METHODS. We conducted a clinical trial using the DAA sofosbuvir plus ribavirin (SOF/RBV) and performed detailed mRNA expression analysis in liver and peripheral blood from patients who achieved either a sustained virologic response (SVR) or relapsed. RESULTS. On-treatment viral clearance was accompanied by rapid downregulation of IFN-stimulated genes (ISGs) in liver and blood, regardless of treatment outcome. Analysis of paired pretreatment and end of treatment (EOT) liver biopsies from SVR patients showed that viral clearance was accompanied by decreased expression of type II and III IFNs, but unexpectedly increased expression of the type I IFN IFNA2. mRNA expression of ISGs was higher in EOT liver biopsies of patients who achieved SVR than in patients who later relapsed. CONCLUSION. These results suggest that restoration of type I intrahepatic IFN signaling by EOT may facilitate HCV eradication and prevention of relapse upon withdrawal of SOF/RBV. TRIAL REGISTRATION. ClinicalTrials.gov NCT01441180. |
A Personalized Service for Scheduling Express Delivery Using Courier Trajectories | With the increasing demand for express delivery, a courier needs to complete many delivery tasks in one day, and punctual delivery, as customers expect, is necessary. At the same time, couriers want to schedule the delivery tasks so as to minimize the total time of a one-day delivery, considering the total travel time. However, most scheduling research on express delivery focuses on inter-city transportation and is not suitable for express delivery to customers in the "last mile". To solve this issue, this paper proposes a personalized service for scheduling express delivery, which not only satisfies all the customers' appointment times but also minimizes the total time. In this service, personalized and accurate travel time estimation is important to guarantee delivery punctuality when delivering shipments. Therefore, the personalized scheduling service is designed to consist of two basic services: (1) a personalized travel time estimation service for any path in express delivery using courier trajectories, and (2) an express delivery scheduling service considering multiple factors, including customers' appointments, one-day delivery costs, etc., which is based on the accurate travel time estimation provided by the first service. We evaluate our proposed service based on extensive experiments, using GPS trajectories generated by more than 1000 couriers over a period of two months in Beijing. The results demonstrate the effectiveness and efficiency of our method.
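The scheduling half of such a service can be sketched as below. The travel-time matrix stands in for the personalized estimates that would come from courier trajectories, the appointment windows and service time are invented, and exhaustive search over permutations is used, which is only feasible for a handful of tasks.

```python
# Toy sketch: choose a delivery order that respects every customer's appointment
# window and minimises the total completion time of the tour.
from itertools import permutations

# travel_time[i][j]: minutes from location i to location j (0 = depot, 1..4 = customers)
travel_time = [
    [0, 10, 15, 20, 12],
    [10, 0, 8, 14, 9],
    [15, 8, 0, 7, 11],
    [20, 14, 7, 0, 6],
    [12, 9, 11, 6, 0],
]
service_time = 5                                  # minutes spent at each customer
windows = {1: (0, 60), 2: (30, 90), 3: (60, 150), 4: (0, 120)}   # appointment windows


def evaluate(order):
    """Return the finish time of the tour, or None if any appointment is violated."""
    t, here = 0, 0
    for cust in order:
        t += travel_time[here][cust]
        start, end = windows[cust]
        t = max(t, start)                         # wait if arriving early
        if t > end:
            return None                           # appointment missed
        t += service_time
        here = cust
    return t


best = min(
    (order for order in permutations(windows) if evaluate(order) is not None),
    key=evaluate,
)
print("best order:", best, "finish time:", evaluate(best), "minutes")
```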
Learning Semantic Correspondences with Less Supervision | A central problem in grounded language acquisition is learning the correspondences between a rich world state and a stream of text which references that world state. To deal with the high degree of ambiguity present in this setting, we present a generative model that simultaneously segments the text into utterances and maps each utterance to a meaning representation grounded in the world state. We show that our model generalizes across three domains of increasing difficulty—Robocup sportscasting, weather forecasts (a new domain), and NFL recaps. |
Slideshow: Gesture-aware PPT presentation | Traditional keyboard- and mouse-based presentation prevents lecturers from interacting with the audience freely and closely. In this paper, we propose a gesture-aware presentation tool named SlideShow to liberate lecturers from physical space constraints and make human-computer interaction more natural and convenient. In our system, gesture data is obtained by a handheld controller with a 3-axis accelerometer and gyroscope and transmitted to the host through Bluetooth; we then use Bayesian change point detection to segment the continuous gesture series and an HMM to recognize each gesture. As a result, SlideShow can carry out the corresponding operations on PowerPoint (PPT) to make a presentation, and operation states can be switched automatically and intelligently during the presentation. Both the experimental and testing results show that our approach is practical, useful and convenient.
Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs | The problem of scheduling in permutation flowshops is considered with the objective of minimizing the makespan, followed by the consideration of minimization of total flowtime of jobs. Two ant-colony optimization algorithms are proposed and analyzed for solving the permutation flowshop scheduling problem. The first algorithm extends the ideas of the ant-colony algorithm by Stuetzle [Proceedings of the 6th European Congress on Intelligent Techniques and Soft Computing (EUFIT 98), vol. 3, Verlag Mainz, Aachen, Germany, 1998, p. 1560], called max–min ant system (MMAS), by incorporating the summation rule suggested by Merkle and Middendorf [Proceedings of the EvoWorkshops 2000, Lecture Notes in Computer Science No. 1803, Springer-Verlag, Berlin, 2000, p. 287] and a newly proposed local search technique. The second ant-colony algorithm is newly developed. The proposed ant-colony algorithms have been applied to 90 benchmark problems taken from Taillard [European Journal of Operational Research 64 (1993) 278]. First, a comparison of the solutions yielded by the MMAS and the two ant-colony algorithms developed in this paper, with the heuristic solutions given by Taillard [European Journal of Operational Research 64 (1993) 278] is undertaken with respect to the minimization of makespan. The comparison shows that the two proposed ant-colony algorithms perform better, on an average, than the MMAS. Subsequently, by considering the objective of minimizing the total flowtime of jobs, a comparison of solutions yielded by the proposed ant-colony algorithms with the best heuristic solutions known for the benchmark problems, as published in an extensive study by Liu and Reeves [European Journal of Operational Research 132 (2001) 439], is carried out. The comparison shows that the proposed ant-colony algorithms are clearly superior to the heuristics analyzed by Liu and Reeves. For 83 out of 90 problems considered, better solutions have been found by the two proposed ant-colony algorithms, as compared to the solutions reported by Liu and Reeves.
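The building blocks of such algorithms can be sketched as follows: a routine that evaluates the makespan and total flowtime of a job permutation, and a single pheromone-biased sequence construction using the summation rule. The processing times are invented, and the sketch is not the MMAS variant or the second algorithm proposed in the paper.

```python
# Minimal sketch: evaluating a job permutation in a permutation flowshop
# (makespan and total flowtime), plus one pheromone-biased sequence construction
# using the "summation rule" (job j is attractive at position i in proportion to
# the pheromone accumulated for j over positions 1..i).
import random

# processing_times[j][m]: time of job j on machine m
processing_times = [
    [5, 9, 8],
    [9, 3, 10],
    [9, 4, 5],
    [4, 8, 8],
]


def evaluate(sequence, p):
    """Return (makespan, total flowtime) of a job sequence."""
    machines = len(p[0])
    completion = [0.0] * machines
    flowtime = 0.0
    for job in sequence:
        for m in range(machines):
            earlier = completion[m - 1] if m > 0 else 0.0
            completion[m] = max(completion[m], earlier) + p[job][m]
        flowtime += completion[-1]
    return completion[-1], flowtime


def construct(pheromone, rng):
    """Build one sequence using the summation rule over pheromone values."""
    n = len(pheromone)
    unscheduled, sequence = set(range(n)), []
    for position in range(n):
        weights = {
            j: sum(pheromone[k][j] for k in range(position + 1)) for j in unscheduled
        }
        total = sum(weights.values())
        job = rng.choices(list(weights), [w / total for w in weights.values()])[0]
        sequence.append(job)
        unscheduled.remove(job)
    return sequence


rng = random.Random(0)
n = len(processing_times)
pheromone = [[1.0] * n for _ in range(n)]        # pheromone[position][job]
seq = construct(pheromone, rng)
print("sequence:", seq, "makespan/flowtime:", evaluate(seq, processing_times))
```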
State of the Art Review of Mobile Payment Technology | Mobile payments will gain significant traction in the coming years as the mobile and payment technologies mature and become widely available. Various technologies are competing to become the established standards for physical and virtual mobile payments, yet it is ultimately the users who will determine the level of success of the technologies through their adoption. Only if it becomes easier and cheaper to transact business using mobile payment applications than by using conventional methods will they become popular, either with users or providers. This document is a state of the art review of mobile payment technologies. It covers all of the technologies involved in a mobile payment solution, including mobile networks in section 2, mobile services in section 3, mobile platforms in section 4, mobile commerce in section 5 and different mobile payment solutions in sections 6 to 8. |
Tolerance of a Salsola kali extract standardized in biological units administered by subcutaneous route. Multicenter study. | BACKGROUND
Sensitivity to Salsola kali is a frequent cause of allergic respiratory disease in various regions of Spain. However, there are very few articles in which this allergen has been studied.
METHODS AND RESULTS
In order to evaluate the tolerance of this extract, a prospective study was performed. This study was observational, multi-centred and open, involving 88 patients with allergic respiratory disease due to sensitivity to Salsola, aged between 5 and 52 years. The administration of the extract was performed subcutaneously, through one of two treatment schedules: cluster (8 doses in 4 visits) or conventional (13 doses in 12 visits). A total of 42 adverse reactions were registered in 26 patients (35 local reactions in 21 patients and 7 systemic reactions in 6 patients). Among the 7 systemic reactions, 4 were registered with the cluster protocol and 2 with the conventional protocol (p = 0.329). No serious adverse reactions were registered in any patient.
CONCLUSION
The subcutaneous administration of a Salsola extract is safe and well tolerated, both when administered using a conventional schedule and when using a cluster schedule. |
Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition | In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine – synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This excess of data exposes new possibilities for word recognition models, and here we consider three models, each one “reading” words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs. |
The Philosophy of Information | and streamlined conditions of possibility of the available narratives, with a view not only to their explanation, but also their modification and innovation. How has the regress developed? The vulgata suggests that the scientific revolution made seventeenth-century philosophers redirect their attention from the nature of the knowable object to the epistemic relation between it and the knowing subject, and hence from metaphysics to epistemology. Th e subsequent growth of the information society and the appearance of the infosphere (the semantic environment which millions of people inhabit nowadays) led contemporary philosophy to privilege critical reflection on the domain represented by the memory and languages of organized knowledge, the instruments whereby the infosphere is modeled and managed – thus moving from epistemology to philosophy of language and logic (Dummett 1993) – and then on the nature of its very fabric and essence, information itself. Information has thus arisen as a concept as fundamental and important as “being,” “knowledge,” “life,” “intelligence,” “meaning,” or “good and evil” – all pivotal concepts with which it is interdependent – and so equally worthy of autonomous investigation. It is also a more basic concept, in terms of which the others can be expressed and interrelated, when not defined. In this sense, Evans was right: Evans had the idea that there is a much cruder and more fundamental concept than that of knowledge on which philosophers have concentrated so much, namely the concept of information. Information is conveyed by perception, and retained by memory, though also transmitted by means of language. One needs to concentrate on that concept before one approaches that of knowledge in the proper sense. Information is acquired, for example, without one’s necessarily having a grasp of the proposition which embodies it; the flow of information operates at a much more basic level than the acquisition and transmission of knowledge. I think that this conception deserves to be explored. It’s not one that ever occurred to me before I read Evans, but it is probably fruitful. That also distinguishes this work very sharply from traditional epistemology. (Dummett 1993: 186) This is why PI can be introduced as a forthcoming philosophia prima, both in the Aristotelian sense of the primacy of its object, information, which PI claims to be a fundamental component in any environment, and in the Cartesian-Kantian sense of the primacy of its methodology and problems, since PI aspires to provide a most valuable, comprehensive approach to philosophical investigations. PI, understood as a foundational philosophy of information modeling and design, can explain and guide the purposeful construction of our intellectual environment, and provide the systematic treatment of the conceptual foundations of contemporary society. It enables humanity to make sense of the world and construct it responsibly, reaching a new stage in the semanticization of being. If what has been suggested here is correct, the current development of PI may be delayed but remains inevitable, and it will affect the overall way in which we address both new and old philosophical problems, bringing about a substantial innovation of the philosophical system. This will represent the information turn in philosophy. Clearly, PI promises to be one of the most exciting and fruitful areas of philosophical research of our time. |
Bismuth Borate Glasses for Photonic Applications: A Review | This paper shows that rare-earth-ion-doped bismuth borate glasses possess excellent structural, optical and thermal properties that make them strong candidates for photonic devices. Several research papers have reported their physical properties and the Judd-Ofelt intensity parameters (Ω2, Ω4, Ω6); these values are used to analyze the emission spectrum of the glass matrix. Ultraviolet to near-infrared measurements have been evaluated for the glass samples. CIE 1931 chromaticity coordinate diagrams confirm the emission color of the glasses. Elastic and bulk properties are analyzed by the ultrasonic velocity method. Elastic moduli and atomic vibrations are determined through the Debye temperature (θD), which plays an important role in solid-state materials. We aim to provide a review of bismuth borate glass materials for light-emitting diodes and laser sources.
IDS rainStorm: visualizing IDS alarms | The massive amount of alarm data generated from intrusion detection systems is cumbersome for network system administrators to analyze. Often, important details are overlooked and it is difficult to get an overall picture of what is occurring in the network by manually traversing textual alarm logs. We have designed a novel visualization to address this problem by showing alarm activity within a network. Alarm data is presented in an overview where system administrators can get a general sense of network activity and easily detect anomalies. They then have the option of zooming and drilling down for details. The information is presented with local network IP (Internet Protocol) addresses plotted over multiple y-axes to represent the location of alarms. Time on the x-axis is used to show the pattern of the alarms, and variations in color encode the severity and amount of alarms. Based on our system administrator requirements study, this graphical layout addresses what system administrators need to see, is faster and easier than analyzing text logs, and uses visualization techniques to effectively scale and display the data. With this design, we have built a tool that effectively uses operational alarm log data generated on the Georgia Tech campus network. The motivation and background of our design are presented along with examples that illustrate its usefulness.
Social Capital within the Context of Business Theory Evolution | Schumpeter holds that entrepreneurship is the primary engine of economic development. Hayek and Kirzner emphasized the role of entrepreneurs in acquiring and processing information. Drucker goes deeper into the classification of entrepreneurial opportunities and provides practical advice to entrepreneurs. Herbert and Link have identified three distinct intellectual traditions in the development of the entrepreneurship literature: 1) The German Tradition, based on von Thuenen and Schumpeter; 2) The Chicago Tradition, based on Knight and Schultz; 3) The Austrian Tradition, based on von Mises, Kirzner and Shackle. The first models for measuring and evaluating organizational activity concerned financial parameters. Edvinsson argues that intellectual capital consists of human capital and structural capital. McElroy adds social capital. In the contemporary academic literature, social capital is discussed in two related ways. The first, primarily associated with the sociologists Burt, Lin, and Portes, refers to the resources that individuals are able to procure by virtue of their relationships with other people. The second approach to social capital, the one most closely associated with Putnam, refers to the nature and extent of one's involvement in various informal networks and formal civic organizations. In recent years, some scholars have suggested a third conceptual classification. Called "linking" social capital, this dimension refers to one's ties to people in positions of authority, such as representatives of public and private institutions.
Quantum-Dot Semiconductor Optical Amplifiers | This paper reviews the recent progress of quantum-dot semiconductor optical amplifiers developed as ultrawideband polarization-insensitive high-power amplifiers, high-speed signal regenerators, and wideband wavelength converters. A semiconductor optical amplifier having a gain of > 25 dB, noise figure of < 5 dB, and 3-dB saturation output power of > 20 dBm, over the record widest bandwidth of 90 nm among all kinds of optical amplifiers, and also having a penalty-free output power of 23 dBm, the record highest among all the semiconductor optical amplifiers, was realized by using quantum dots. By utilizing isotropically shaped quantum dots, the TM gain, which is absent in the standard Stranski-Krastanow QDs, has been drastically enhanced, and nearly polarization-insensitive SOAs have been realized for the first time. With an ultrafast gain response unique to quantum dots, an optical regenerator having receiver-sensitivity improving capability of 4 dB at a BER of 10^-9 and operating speed of > 40 Gb/s has been successfully realized with an SOA chip. This performance achieved together with simplicity of structure suggests a potential for low-cost realization of regenerative transmission systems.
Automatic Localization of Indoor Soccer Players from Multiple Cameras | Nowadays, there is an ever-growing quest among team sports for sophisticated performance evaluation tools that could give them an additional inch, or a quarter of a second, of advantage in a competition. Using cameras to shoot the events of a game, for instance, teams can analyze the performance of the athletes and even extrapolate the data to obtain semantic information about the behavior of the teams themselves at relatively low cost. In this context, this paper introduces a new approach for better estimating the positions of indoor soccer players using multiple cameras at all moments of a game. The setup consists of four stationary cameras set around the soccer court. Our solution relies on individual object detectors (one per camera) working in the image coordinates and a robust fusion approach working in the world coordinates in a plane that represents the soccer court. The fusion approach relies on a gradient ascent algorithm over a multimodal bidimensional mixture of Gaussians function representing all the players in the soccer court. In the experiments, we show that the proposed solution improves standard object detector approaches and greatly reduces the mean error rate of soccer player detection to a few centimeters with respect to the actual positions of the players.
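The fusion step can be sketched as follows, assuming isotropic Gaussian components centred at the per-camera detections projected to court coordinates; the weights, standard deviation and step size are illustrative choices, not the paper's calibration.

```python
# Minimal sketch: per-camera detections, projected to court coordinates, define an
# isotropic Gaussian mixture, and a gradient-ascent climb from a starting point
# snaps it to the nearest density mode, taken as the fused player position.
import numpy as np


def gmm_density_and_gradient(x, means, weights, sigma):
    """Density of an isotropic Gaussian mixture at x and its gradient."""
    diffs = means - x                                        # (K, 2)
    sq = np.sum(diffs ** 2, axis=1)
    comps = weights * np.exp(-sq / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    density = comps.sum()
    grad = (comps[:, None] * diffs).sum(axis=0) / sigma ** 2
    return density, grad


def climb_to_mode(x0, means, weights, sigma, step=0.05, iters=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        _, grad = gmm_density_and_gradient(x, means, weights, sigma)
        if np.linalg.norm(grad) < 1e-9:
            break
        x = x + step * grad                      # plain gradient-ascent step
    return x


if __name__ == "__main__":
    # Projected detections of the same player from the four cameras (metres).
    detections = np.array([[10.1, 5.2], [10.4, 4.9], [9.8, 5.0], [10.2, 5.3]])
    weights = np.full(len(detections), 1.0 / len(detections))
    fused = climb_to_mode(detections.mean(axis=0), detections, weights, sigma=0.3)
    print("fused position:", np.round(fused, 2))
```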
NavyTime: Event and Time Ordering from Raw Text | This paper describes a complete event/time ordering system that annotates raw text with events, times, and the ordering relations between them at the SemEval-2013 Task 1. Task 1 is a unique challenge because it starts from raw text, rather than pre-annotated text with known events and times. A working system first identifies events and times, then identifies which events and times should be ordered, and finally labels the ordering relation between them. We present a split classifier approach that breaks the ordering tasks into smaller decision points. Experiments show that more specialized classifiers perform better than few joint classifiers. The NavyTime system ranked second both overall and in most subtasks like event extraction and relation labeling. |
Probability of Random Correspondence for Fingerprints | Individuality of fingerprints can be quantified by computing probabilistic metrics for measuring the degree of fingerprint individuality. In this paper, we present a novel individuality evaluation approach to estimate the probability of random correspondence (PRC). Three generative models are developed to represent the distribution of fingerprint features: ridge flow, minutiae, and minutiae together with ridge points. A mathematical model that computes the PRCs is derived based on the generative models. Three metrics are discussed in this paper: (i) the PRC of two samples, (ii) the PRC among a random set of n samples (nPRC) and (iii) the PRC between a specific sample and n others (specific nPRC). Experimental results show that the theoretical estimates of fingerprint individuality using our model consistently follow the empirical values based on the NIST4 database.
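A generic combinatorial illustration of the three metrics is sketched below; the pairwise probability p is simply an assumed input here, whereas the paper derives it from the generative models of ridge flow and minutiae.

```python
# Generic counting relations between an assumed pairwise match probability p and
# the three metrics discussed in the abstract. This illustrates the combinatorial
# step only, not the paper's generative models.
from math import comb


def prc(p):
    """Probability that two random fingerprints correspond."""
    return p


def nprc(p, n):
    """Probability that at least one of the C(n, 2) pairs among n prints corresponds."""
    return 1.0 - (1.0 - p) ** comb(n, 2)


def specific_nprc(p, n):
    """Probability that a specific print corresponds to at least one of n others."""
    return 1.0 - (1.0 - p) ** n


if __name__ == "__main__":
    p = 1e-6                       # assumed pairwise probability of random correspondence
    for n in (100, 10_000, 1_000_000):
        print(n, f"nPRC={nprc(p, n):.3e}", f"specific nPRC={specific_nprc(p, n):.3e}")
```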
Structured AutoEncoders for Subspace Clustering | Existing subspace clustering methods typically employ shallow models to estimate underlying subspaces of unlabeled data points and cluster them into corresponding groups. However, due to the limited representative capacity of the employed shallow models, those methods may fail in handling realistic data without the linear subspace structure. To address this issue, we propose a novel subspace clustering approach by introducing a new deep model—Structured AutoEncoder (StructAE). The StructAE learns a set of explicit transformations to progressively map input data points into nonlinear latent spaces while preserving the local and global subspace structure. In particular, to preserve local structure, the StructAE learns representations for each data point by minimizing reconstruction error with respect to itself. To preserve global structure, the StructAE incorporates a prior structured information by encouraging the learned representation to preserve specified reconstruction patterns over the entire data set. To the best of our knowledge, StructAE is one of the first deep subspace clustering approaches. Extensive experiments show that the proposed StructAE significantly outperforms 15 state-of-the-art subspace clustering approaches in terms of five evaluation metrics. |
A Data-Driven Stochastic Method for Elliptic PDEs with Random Coefficients | We propose a data-driven stochastic method (DSM) to study stochastic partial differential equations (SPDEs) in the multiquery setting. An essential ingredient of the proposed method is to construct a data-driven stochastic basis under which the stochastic solutions to the SPDEs enjoy a compact representation for a broad range of forcing functions and/or boundary conditions. Our method consists of offline and online stages. A data-driven stochastic basis is computed in the offline stage using the Karhunen–Loève (KL) expansion. A two-level preconditioning optimization approach and a randomized SVD algorithm are used to reduce the offline computational cost. In the online stage, we solve a relatively small number of coupled deterministic PDEs by projecting the stochastic solution onto the data-driven stochastic basis constructed offline. Compared with a generalized polynomial chaos method (gPC), the ratio of the computational complexities between DSM (online stage) and gPC is of order O((m/Np) ). Here m and Np are the numbers of elements in the basis used in DSM and gPC, respectively. Typically we expect m ≪ Np when the effective dimension of the stochastic solution is small. A timing model, which takes into account the offline computational cost of DSM, is constructed to demonstrate the efficiency of DSM. Applications of DSM to stochastic elliptic problems show considerable computational savings over traditional methods even with a small number of queries. We also provide a method for an a posteriori error estimate and error correction.
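The offline stage can be sketched as follows: build a reduced stochastic basis from solution snapshots via a discrete Karhunen-Loeve expansion computed with a randomized SVD. The toy one-dimensional snapshots and the 99.99% energy threshold are assumptions for illustration; the online projection and preconditioning stages are not shown.

```python
# Sketch of the offline stage only: a data-driven reduced basis from solution
# snapshots via the discrete Karhunen-Loeve expansion (randomized SVD).
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(0)
n_grid, n_snapshots = 200, 400
x = np.linspace(0.0, 1.0, n_grid)

# Toy snapshots: solutions u(x; xi) of a problem with a random coefficient xi.
snapshots = np.array(
    [np.sin(np.pi * x) / (1.0 + 0.5 * xi) + 0.05 * xi * x * (1 - x)
     for xi in rng.uniform(-1.0, 1.0, n_snapshots)]
).T                                              # shape (n_grid, n_snapshots)

mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = randomized_svd(snapshots - mean, n_components=10, random_state=0)

energy = np.cumsum(s ** 2) / np.sum(s ** 2)
m = int(np.searchsorted(energy, 0.9999) + 1)     # modes kept for the reduced basis
basis = U[:, :m]
print("modes kept:", m, "captured energy:", round(float(energy[m - 1]), 6))

# Any new snapshot can now be compressed onto the data-driven basis:
new = np.sin(np.pi * x) / (1.0 + 0.5 * 0.7) + 0.05 * 0.7 * x * (1 - x)
coeffs = basis.T @ (new - mean.ravel())
reconstruction = mean.ravel() + basis @ coeffs
print("reconstruction error:", float(np.linalg.norm(new - reconstruction)))
```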
Comparison of enrollees and decliners of Parkinson disease sham surgery trials. | Concerns have been raised that persons with serious illnesses participating in high-risk research, such as PD patients in sham surgery trials, have unrealistic expectations and are vulnerable to exploitation. A comparison of enrollees and decliners of such research may provide insights about the adequacy of decision making by potential subjects. We compared 61 enrollees and 10 decliners of two phase II neurosurgical intervention (i.e., cellular and gene transfer) trials for PD regarding their demographic and clinical status, perceptions and attitudes regarding research risks, potential direct benefit, and societal benefit, and perspectives on the various potential reasons for and against participation. In addition to bivariate analyses, a logistic regression model examined variables regarding risks and benefits as predictors of participation status. Enrollees perceived lower risk of harm while tolerating higher risk of harm and were more action oriented, but did not have more advanced disease. Both groups rated hope for benefit as a strong reason to participate, whereas the fact that the study's purpose was not solely to benefit them was rated as "not a reason" against participation. Hope for benefit and altruism were rated higher than expectation of benefit as reasons in favor of participation for both groups. Enrollees and decliners are different in their views and attitudes toward risk. Although both are attracted to research because of hopes of personal benefit, this hope is clearly distinguishable from an expectation of benefit and does not imply a failure to understand the main purpose of research. |
Low-power 4-2 and 5-2 compressors | This paper explores various low power higher order compressors such as 4-2 and 5-2 compressor units. These compressors are building blocks for binary multipliers. Various circuit architectures for 4-2 compressors are compared with respect to their delay and power consumption. The different circuits are simulated using HSPICE. A new circuit for a 5-2 compressor is then presented which is 12% faster and consumes 37% less power. |
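A behavioural model of one standard XOR/MUX-based 4-2 compressor, with an exhaustive check of its defining arithmetic identity, is sketched below; this particular gate-level formulation is a common textbook variant and not necessarily one of the circuits compared in the paper, and transistor-level delay and power are outside the scope of such a model.

```python
# Behavioural model of a standard XOR/MUX-based 4-2 compressor, with an exhaustive
# check of the identity x1 + x2 + x3 + x4 + cin = sum + 2 * (carry + cout).
from itertools import product


def compressor_4_2(x1, x2, x3, x4, cin):
    t = x1 ^ x2 ^ x3 ^ x4
    s = t ^ cin                                  # sum output
    cout = x3 if (x1 ^ x2) else x1               # MUX select: x1 XOR x2
    carry = cin if t else x4                     # MUX select: t
    return s, carry, cout


for bits in product((0, 1), repeat=5):
    s, carry, cout = compressor_4_2(*bits)
    assert sum(bits) == s + 2 * (carry + cout), bits
print("4-2 compressor identity verified for all 32 input combinations")
```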
A Distributed Newton Method for Large Scale Consensus Optimization | In this paper, we propose a distributed Newton method for consensus optimization. Our approach outperforms state-of-the-art methods, including ADMM. The key idea is to exploit the sparsity of the dual Hessian and recast the computation of the Newton step as one of efficiently solving symmetric diagonally dominant linear equations. We validate our algorithm both theoretically and empirically. On the theory side, we demonstrate that our algorithm exhibits superlinear convergence within a neighborhood of optimality. Empirically, we show the superiority of this new method on a variety of machine learning problems. The proposed approach is scalable to very large problems and has a low communication overhead. |
A NOVEL SOFT-SWITCHING BUCK CONVERTER WITH COUPLED INDUCTOR | A novel topology for a soft-switching buck dc–dc converter with a coupled inductor is proposed. The soft-switching buck converter has advantages over traditional hard-switching converters. The most significant advantage is that it offers a lower switching loss. This converter operates under a zero-current switching condition at turn-on and a zero-voltage switching condition at turn-off. It presents the circuit configuration with the fewest components for realizing soft switching. Because of soft switching, the proposed converter can attain a high efficiency under heavy load conditions. Likewise, a high efficiency is also attained under light load conditions, which is significantly different from other soft-switching buck converters.
Linear gate assignment: a fast statistical mechanics approach | This paper deals with the problem of linear gate assignment in two layout styles: one-dimensional logic array and gate matrix layout. The goal is to find the optimal sequencing of gates in order to minimize the required number of tracks, and thus to reduce the overall circuit layout area. This is known to be an NP-hard optimization problem, for whose solution no absolute approximation algorithm exists. Here we report the use of a new optimization heuristic derived from statistical mechanics, the microcanonical optimization algorithm (μO), to solve the linear gate assignment problem. Our numerical results show that μO compares favorably with at least five previously employed heuristics: simulated annealing, the unidirectional and the bidirectional Hong construction methods, and the artificial intelligence heuristics GM_Plan and GM_Learn. Moreover, in a massive set of experiments with circuits whose optimal layout is not known, our algorithm has been able to match and even to improve, by as much as 7 tracks, the best solutions known so far.
Epstein–Barr virus promotes genomic instability in Burkitt's lymphoma | Epstein–Barr virus (EBV) has been implicated in the pathogenesis of human malignancies but the mechanisms of oncogenesis remain largely unknown. Genomic instability and chromosomal aberrations are hallmarks of malignant transformation. We report that EBV carriage promotes genomic instability in Burkitt's lymphoma (BL). Cytogenetic analysis of EBV− and EBV+ BL lines and their sublines derived by EBV conversion or spontaneous loss of the viral genome revealed a significant increase in dicentric chromosomes, chromosome fragments and chromatid gaps in EBV-carrying cells. Expression of EBV latency I was sufficient for this effect, whereas a stronger effect was observed in cells expressing latency III. Telomere analysis by fluorescent in situ hybridization revealed an overall increase of telomere size and prevalence of telomere fusion and double strand-break fusion in dicentric chromosomes from EBV+ cells. Phosphorylated H2AX, a reporter of DNA damage and ongoing repair, was increased in virus-carrying cells in the absence of exogenous stimuli, whereas efficient activation of DNA repair was observed in both EBV+ and EBV− cells following treatment with etoposide. These findings point to induction of telomere dysfunction and DNA damage as important mechanisms for EBV oncogenesis. |
Control of a humanoid robot by a noninvasive brain-computer interface in humans. | We describe a brain-computer interface for controlling a humanoid robot directly using brain signals obtained non-invasively from the scalp through electroencephalography (EEG). EEG has previously been used for tasks such as controlling a cursor and spelling a word, but it has been regarded as an unlikely candidate for more complex forms of control owing to its low signal-to-noise ratio. Here we show that by leveraging advances in robotics, an interface based on EEG can be used to command a partially autonomous humanoid robot to perform complex tasks such as walking to specific locations and picking up desired objects. Visual feedback from the robot's cameras allows the user to select arbitrary objects in the environment for pick-up and transport to chosen locations. Results from a study involving nine users indicate that a command for the robot can be selected from four possible choices in 5 s with 95% accuracy. Our results demonstrate that an EEG-based brain-computer interface can be used for sophisticated robotic interaction with the environment, involving not only navigation as in previous applications but also manipulation and transport of objects. |
Dopamine and gamma band synchrony in schizophrenia--insights from computational and empirical studies. | Dopamine modulates cortical circuit activity in part through its actions on GABAergic interneurons, including increasing the excitability of fast-spiking interneurons. Though such effects have been demonstrated in single cells, there are no studies that examine how such mechanisms may lead to the effects of dopamine at a neural network level. With this motivation, we investigated the effects of dopamine on synchronization in a simulated neural network composed of excitatory and fast-spiking inhibitory Wang-Buzsaki neurons. The effects of dopamine were implemented through varying leak K+ conductance of the fast-spiking interneurons and the network synchronization within the gamma band (∼40 Hz) was analyzed. Parametrically varying the leak K+ conductance revealed an inverted-U shaped relationship, with low gamma band power at both low and high conductance levels and optimal synchronization at intermediate conductance levels. We also examined the effects of modulating excitability of the inhibitory neurons more generically using an idealized model with theta neurons, with similar findings. Moreover, such a relationship holds when the external input is both tonic and periodic. Our computational results mirror our empirical study of dopamine modulation in schizophrenia and healthy controls, which showed that amphetamine administration increased gamma power in patients but decreased it in controls. Together, our computational and empirical investigations indicate that dopamine can modulate cortical gamma band synchrony in an inverted-U fashion and that the physiologic effects of dopamine on single fast-spiking interneurons can give rise to such non-monotonic effects at the network level. |
Dutch Disease Scare in Kazakhstan: Is it Real? | In this paper we explore the evidence that would establish that Dutch disease is at work in, or poses a threat to, the Kazakh economy. Assessing the mechanism by which fluctuations in the price of oil can damage non-oil manufacturing—and thus long-term growth prospects in an economy that relies heavily on oil production—we find that non-oil manufacturing has so far been spared the perverse effects of oil price increases from 1996 to 2005. The real exchange rate in the open sector has appreciated over the last couple of years, largely due to the appreciation of the nominal exchange rate. We analyze to what extent this appreciation is linked to movements in oil prices and oil revenues. Econometric evidence from the monetary model of the exchange rate and a variety of real exchange rate models show that the rise in the price of oil and in oil revenues might be linked to an appreciation of the U.S. dollar exchange rate of the oil and non-oil sectors. But appreciation is mainly limited to the real effective exchange rate for oil sector and is statistically insignificant for non-oil manufacturing. JEL Code: F31, F36, O11.
ICDAR 2013 Robust Reading Competition | This report presents the final results of the ICDAR 2013 Robust Reading Competition. The competition is structured in three Challenges addressing text extraction in different application domains, namely born-digital images, real scene images and real-scene videos. The Challenges are organised around specific tasks covering text localisation, text segmentation and word recognition. The competition took place in the first quarter of 2013, and received a total of 42 submissions over the different tasks offered. This report describes the datasets and ground truth specification, details the performance evaluation protocols used and presents the final results along with a brief summary of the participating methods. |
Exploring large scale data for multimedia QA: an initial study | With the explosive growth of multimedia contents on the internet, multimedia search has become more and more important. However, users are often bewildered by the vast quantity of information content returned by the search engines. In this scenario, Multimedia Question-Answering (MMQA) emerges as a way to return precise answers by leveraging advanced media content and linguistic analysis as well as domain knowledge. This paper performs an initial study on exploring large scale data for MMQA. First, we construct a web video dataset and discuss its query strategy, statistics, feature description and ground truth. We then conduct experiments based on the dataset to answer definition event questions using three schemes. We finally conclude the study with a discussion of future work. |
Generalist genes and learning disabilities: a multivariate genetic analysis of low performance in reading, mathematics, language and general cognitive ability in a sample of 8000 12-year-old twins. | BACKGROUND
Our previous investigation found that the same genes influence poor reading and mathematics performance in 10-year-olds. Here we assess whether this finding extends to language and general cognitive disabilities, as well as replicating the earlier finding for reading and mathematics in an older and larger sample.
METHODS
Using a representative sample of 4000 pairs of 12-year-old twins from the UK Twins Early Development Study, we investigated the genetic and environmental overlap between internet-based batteries of language and general cognitive ability tests in addition to tests of reading and mathematics for the bottom 15% of the distribution using DeFries-Fulker extremes analysis. We compared these results to those for the entire distribution.
RESULTS
All four traits were highly correlated at the low extreme (average group phenotypic correlation = .58) and in the entire distribution (average phenotypic correlation = .59). Genetic correlations for the low extreme were consistently high (average = .67), and non-shared environmental correlations were modest (average = .23). These results are similar to those seen across the entire distribution (.68 and .23, respectively).
CONCLUSIONS
The 'Generalist Genes Hypothesis' holds for language and general cognitive disabilities, as well as reading and mathematics disabilities. Genetic correlations were high, indicating a strong degree of overlap in genetic influences on these diverse traits. In contrast, non-shared environmental influences were largely specific to each trait, causing phenotypic differentiation of traits. |
Stretchable multichannel antennas in soft wireless optoelectronic implants for optogenetics. | Optogenetic methods to modulate cells and signaling pathways via targeted expression and activation of light-sensitive proteins have greatly accelerated the process of mapping complex neural circuits and defining their roles in physiological and pathological contexts. Recently demonstrated technologies based on injectable, microscale inorganic light-emitting diodes (μ-ILEDs) with wireless control and power delivery strategies offer important functionality in such experiments, by eliminating the external tethers associated with traditional fiber optic approaches. Existing wireless μ-ILED embodiments allow, however, illumination only at a single targeted region of the brain with a single optical wavelength and over spatial ranges of operation that are constrained by the radio frequency power transmission hardware. Here we report stretchable, multiresonance antennas and battery-free schemes for multichannel wireless operation of independently addressable, multicolor μ-ILEDs with fully implantable, miniaturized platforms. This advance, as demonstrated through in vitro and in vivo studies using thin, mechanically soft systems that separately control as many as three different μ-ILEDs, relies on specially designed stretchable antennas in which parallel capacitive coupling circuits yield several independent, well-separated operating frequencies, as verified through experimental and modeling results. When used in combination with active motion-tracking antenna arrays, these devices enable multichannel optogenetic research on complex behavioral responses in groups of animals over large areas at low levels of radio frequency power (<1 W). Studies of the regions of the brain that are involved in sleep arousal (locus coeruleus) and preference/aversion (nucleus accumbens) demonstrate the unique capabilities of these technologies. |
GPflow: A Gaussian Process Library using TensorFlow | GPflow is a Gaussian process library that uses TensorFlow for its core computations and Python for its front end. The distinguishing features of GPflow are that it uses variational inference as the primary approximation method, provides concise code through the use of automatic differentiation, has been engineered with a particular emphasis on software testing, and is able to exploit GPU hardware. GPflow and TensorFlow are available as open source software under the Apache 2.0 license. |
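For readers unfamiliar with the library, a minimal regression sketch follows. It uses the current GPflow 2.x interface, which differs from the 1.x API described in the 2017 paper; the model class, kernel, and optimizer calls here reflect the present-day package rather than anything quoted from the paper.

```python
import numpy as np
import gpflow

# Toy 1-D regression data
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 1))
Y = np.sin(6.0 * X) + 0.1 * rng.standard_normal((50, 1))

# Exact GP regression with a squared-exponential kernel (GPflow 2.x style)
model = gpflow.models.GPR(data=(X, Y), kernel=gpflow.kernels.SquaredExponential())
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

# Posterior predictions on a test grid
Xtest = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
mean, var = model.predict_y(Xtest)
```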
A Survey of Procedural Techniques for City Generation | The computer game industry requires a skilled workforce, and this, combined with the complexity of modern games, means that production costs are extremely high. One of the most time-consuming aspects is the creation of game geometry, the virtual world which the players inhabit. Procedural techniques have been used within computer graphics to create natural textures, simulate special effects and generate complex natural models including trees and waterfalls. It is these procedural techniques that we intend to harness to generate geometry and textures suitable for a game situated in an urban environment. Procedural techniques can provide many benefits for computer graphics applications when the correct algorithm is used. An overview of several commonly used procedural techniques including fractals, L-systems, Perlin noise, tiling systems and cellular basis is provided. The function of each technique and the resulting output they create are discussed to better understand their characteristics, benefits and relevance to the city generation problem. City generation is the creation of an urban area which necessitates the creation of buildings, situated along streets and arranged in appropriate patterns. Some research has already taken place into recreating road network patterns and generating buildings that can vary in function and architectural style. We will study the main body of existing research into procedural city generation and provide an overview of their implementations and a critique of their functionality and results. Finally we present areas in which further research into the generation of cities is required and outline our research goals for city generation. |
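As a concrete illustration of one technique the survey covers, the sketch below expands a string-rewriting (L-system) grammar. The rule set is a generic branching example chosen here for illustration, not one taken from the survey itself.

```python
def expand(axiom: str, rules: dict, iterations: int) -> str:
    """Apply L-system production rules to the axiom a fixed number of times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A common branching rule (F = draw forward, +/- = turn, [] = push/pop turtle state)
rules = {"F": "F[+F]F[-F]F"}
print(expand("F", rules, 2))
```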
Deep Binary Reconstruction for Cross-Modal Hashing | With the increasing demand for massive multimodal data storage and organization, cross-modal retrieval based on hashing techniques has drawn much attention nowadays. It takes the binary codes of one modality as the query to retrieve the relevant hashing codes of another modality. However, the existing binary constraint makes it difficult to find the optimal cross-modal hashing function. Most approaches choose to relax the constraint and perform a thresholding strategy on the real-value representation instead of directly solving the original objective. In this paper, we first provide a concrete analysis of the effectiveness of multimodal networks in preserving the inter- and intra-modal consistency. Based on the analysis, we propose a Deep Binary Reconstruction (DBRC) network that can directly learn the binary hashing codes in an unsupervised fashion. The superiority comes from a simple but efficient activation function, named Adaptive Tanh (ATanh). The ATanh function can adaptively learn the binary codes and be trained via back-propagation. Extensive experiments on three benchmark datasets demonstrate that DBRC outperforms several state-of-the-art methods in both image2text and text2image retrieval tasks. |
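The abstract does not spell out the ATanh parameterization; the sketch below assumes a tanh with a learnable slope, which becomes increasingly step-like as the slope grows, so the continuous codes approach binary values while remaining trainable by back-propagation. Treat the layer name and the toy encoder around it as illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AdaptiveTanh(nn.Module):
    """tanh with a learnable slope; larger slopes push activations toward {-1, +1}."""
    def __init__(self, init_slope: float = 1.0):
        super().__init__()
        self.slope = nn.Parameter(torch.tensor(init_slope))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.slope * x)

# Toy encoder producing 32-dimensional near-binary codes
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32), AdaptiveTanh())
codes = torch.sign(encoder(torch.randn(8, 128)))  # hard binarization at retrieval time
```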
Comparison of the clinical history of symptomatic isolated muscular calf vein thrombosis versus deep calf vein thrombosis. | BACKGROUND
Half of all lower limb deep vein thromboses (DVT) are distal DVT that are equally distributed between muscular calf vein thromboses (MCVT) and deep calf vein thromboses (DCVT). Despite their high prevalence, MCVT and DCVT have never been compared so far, which prevents possible modulation of distal DVT management according to the kind of distal DVT (MCVT or DCVT).
METHODS
Using data from the French, multicenter, prospective observational OPTimisation de l'Interrogatoire dans l'évaluation du risque throMbo-Embolique Veineux (OPTIMEV) study, we compared the clinical presentation and risk factors of 268 symptomatic isolated DCVT and 457 symptomatic isolated MCVT and the 3-month outcomes of the 222 DCVT and 390 MCVT that were followed-up.
RESULTS
During the entire follow-up, 86.5% of DCVT patients and 76.7% of MCVT patients were treated with anticoagulant drugs (P = .003). MCVT was significantly more associated with localized pain than DCVT (30.4% vs 22.4%, P = .02) and less associated with swelling (47.9% vs 62.7%, P < .001). MCVT and DCVT patients exhibited the same risk factor profile, except that recent surgery was slightly more associated with DCVT (odds ratio, 1.70; confidence interval, 1.06-2.75), and had equivalent comorbidities as evaluated by the Charlson index. At 3 months, no statistically significant difference was noted between MCVT and DCVT in death (3.8% vs 4.1%), venous thromboembolism recurrence (1.5% vs 1.4%), and major bleeding (0% vs 0.5%).
CONCLUSION
Isolated symptomatic MCVT and DCVT exhibit different clinical symptoms at presentation but affect the same patient population. Under anticoagulant treatment and in the short-term, isolated distal DVT constitutes a homogeneous entity. Therapeutic trials are needed to determine a consensual mode of care of MCVT and DCVT. |
Machine Learning Meets Databases | Machine Learning has become highly popular due to several success stories in data-driven applications. Prominent examples include object detection in images, speech recognition, and text translation. According to Gartner’s 2016 Hype Cycle for Emerging Technologies, Machine Learning is currently at its peak of inflated expectations, with several other application domains trying to exploit the use of Machine Learning technology. Since data-driven applications are a fundamental cornerstone of the database community as well, it becomes natural to ask how these fields relate to each other. In this article, we will therefore provide a brief introduction to the field of Machine Learning, we will discuss its interplay with other fields such as Data Mining and Databases, and we provide an overview of recent data management systems integrating Machine Learning functionality. |
Effectiveness of a nurse-based outreach program for identifying and treating psychiatric illness in the elderly. | CONTEXT
Elderly persons with psychiatric disorders are less likely than younger adults to be diagnosed as having a mental disorder and receive needed mental health treatment. Lack of access to care is 1 possible cause of this disparity.
OBJECTIVE
To determine whether a nurse-based mobile outreach program to seriously mentally ill elderly persons is more effective than usual care in diminishing levels of depression, psychiatric symptoms, and undesirable moves (eg, nursing home placement, eviction, board and care placement).
DESIGN
Prospective randomized trial conducted between March 1993 and April 1996 to assess the effectiveness of the Psychogeriatric Assessment and Treatment in City Housing (PATCH) program.
SETTING
Six urban public housing sites for elderly persons in Baltimore, Md.
PARTICIPANTS
A total of 945 (83%) of 1195 residents in the 6 sites underwent screening for psychiatric illness. Among those screened, 342 screened positive and 603 screened negative. All screen-positive subjects aged 60 years and older (n=310) and a 10% random sample of screen-negative subjects aged 60 years and older (n=61) were selected for a structured psychiatric interview. Eleven subjects moved or died; 245 (82%) of those who screened positive and 53 (88%) of those who screened negative were evaluated to determine who had a psychiatric disorder. Data were weighted to estimate the prevalence of psychiatric disorders at the 6 sites.
INTERVENTION
Among the 6 sites, residents in 3 buildings were randomized to receive the PATCH model intervention, which included educating building staff to be case finders, performing assessment in residents' apartments, and providing care when indicated; and residents in the remaining 3 buildings were randomized to receive usual care (comparison group).
MAIN OUTCOME MEASURES
Number of undesirable moves and scores on the Montgomery-Asberg Depression Rating Scale (MADRS), a measure of depressive symptoms, and the Brief Psychiatric Rating Scale (BPRS), a measure of psychiatric symptoms and behavioral disorder, in intervention vs comparison sites.
RESULTS
Based on weighted data, at 26 months of follow-up, psychiatric cases at the intervention sites had significantly lower (F(1)=31.18; P<.001) MADRS scores (9.1 vs 15.2) and significantly lower (F(1)=17.35; P<.001) BPRS scores (27.4 vs 33.9) than those at the nontreatment comparison sites. There was no significant difference between the groups in undesirable moves (relative risk, 0.97; 95% confidence interval, 0.44-2.17).
CONCLUSIONS
These results indicate that the PATCH intervention was more effective than usual care in reducing psychiatric symptoms in persons with psychiatric disorders and those with elevated levels of psychiatric symptoms. |
Adaptive dynamic surface control of nonlinear systems with unknown dead zone in pure feedback form | In this paper, adaptive dynamic surface control (DSC) is developed for a class of pure-feedback nonlinear systems with unknown dead zone and perturbed uncertainties using neural networks. The explosion of complexity in traditional backstepping design is avoided by utilizing dynamic surface control and introducing an integral-type Lyapunov function. It is proved that the proposed design method is able to guarantee semi-global uniform ultimate boundedness of all signals in the closed-loop system, with arbitrarily small tracking error by appropriately choosing design constants. Simulation results demonstrate the effectiveness of the proposed approach. |
Grief and trauma intervention for children after disaster: exploring coping skills versus trauma narration. | This study evaluated the differential effects of the Grief and Trauma Intervention (GTI) with coping skills and trauma narrative processing (CN) and coping skills only (C). Seventy African American children (6-12 years old) were randomly assigned to GTI-CN or GTI-C. Both treatments consisted of a manualized 11-session intervention and a parent meeting. Measures of trauma exposure, posttraumatic stress symptoms, depression, traumatic grief, global distress, social support, and parent-reported behavioral problems were administered at pre-intervention, post-intervention, and 3 and 12 months post-intervention. In general, children in both treatment groups demonstrated significant improvements in distress-related symptoms and social support, which, with the exception of externalizing symptoms for GTI-C, were maintained up to 12 months post-intervention. Results suggest that building coping skills without the structured trauma narrative may be a viable intervention to achieve symptom relief in children experiencing trauma-related distress. However, it may be that highly distressed children experience more symptom relief with coping skills plus narrative processing than with coping skills alone. More research on the differential effects of coping skills and trauma narration on child distress and adaptive functioning outcomes is needed. |
Distance-Preserving Subgraphs of Interval Graphs | We consider the problem of finding small distance-preserving subgraphs of undirected, unweighted interval graphs with k terminal vertices. We prove the following results. 1. Finding an optimal distance-preserving subgraph is NP-hard for general graphs. 2. Every interval graph admits a subgraph with O(k) branching vertices that approximates pairwise terminal distances up to an additive term of +1. 3. There exists an interval graph Gint for which the +1 approximation is necessary to obtain the O(k) upper bound on the number of branching vertices. In particular, any distance-preserving subgraph of Gint has Ω(k log k) branching vertices. 4. Every interval graph admits a distance-preserving subgraph with O(k log k) branching vertices, i.e. the Ω(k log k) lower bound for interval graphs is tight. 5. There exists an interval graph such that every optimal distance-preserving subgraph of it has O(k) branching vertices and Ω(k log k) branching edges, thereby providing a separation between branching vertices and branching edges. The O(k) bound for distance-approximating subgraphs follows from a naïve analysis of shortest paths in interval graphs. Gint is constructed using bit-reversal permutation matrices. The O(k log k) bound for distance-preserving subgraphs uses a divide-and-conquer approach. Finally, the separation between branching vertices and branching edges employs Hansel's lemma [Han64] for graph covering. |
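To make the object of study concrete, the sketch below builds a distance-preserving subgraph the naive way, as a union of one shortest path per terminal pair. This preserves all pairwise terminal distances but makes no attempt to bound branching vertices, unlike the divide-and-conquer construction analysed in the paper; the use of networkx and the toy path graph are choices made here purely for illustration.

```python
import itertools
import networkx as nx

def naive_distance_preserving_subgraph(G, terminals):
    """Union of one shortest path per terminal pair; preserves pairwise distances."""
    H = nx.Graph()
    H.add_nodes_from(terminals)
    for u, v in itertools.combinations(terminals, 2):
        path = nx.shortest_path(G, u, v)
        H.add_edges_from(zip(path, path[1:]))
    return H

G = nx.path_graph(10)  # a path is a (trivial) interval graph
H = naive_distance_preserving_subgraph(G, [0, 4, 9])
assert nx.shortest_path_length(H, 0, 9) == nx.shortest_path_length(G, 0, 9)
```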
Dodonaea viscosa (L.) Leaves extract as acid Corrosion inhibitor for mild Steel - A Green approach | There is a growing trend to utilize plant extracts and pharmaceutical compounds as corrosion inhibitors. The inhibitive performance of extract of Dodonaea viscosa Leaves (DVLE) on the corrosion of mild steel in 1M HCl and 0.5M H2SO4 was studied using mass loss and electrochemical measurements. Characterization of DVLE was carried out using GCMS and FT-IR spectroscopy. Results confirmed that the extract of Dodonaea viscosa Leaves (DVLE) acts as an effective corrosion inhibitor in the acid environment. The inhibition efficiency increased with increasing inhibitor concentration and decreased with temperature. Thermodynamic parameters revealed that the inhibition is through spontaneous adsorption of inhibitors onto the metal surface. Potentiodynamic polarization and electrochemical impedance studies confirmed that the system follows a mixed mode of inhibition. The adsorbed film of inhibitor at the metal/solution interface has been confirmed using reflectance Fourier transform infrared spectroscopy. Surface analysis by UV, FTIR and SEM confirmed the formation of a protective layer on the mild steel (MS) surface. Efforts were also taken to propose a suitable mechanism for the inhibition. |
What happens to contraceptive use after injectables are introduced? An analysis of 13 countries. | CONTEXT
Although the introduction of a new method is generally hailed as a boon to contraceptive prevalence, uptake of new methods can reduce the use of existing methods. It is important to examine changing patterns of contraceptive use and method mix after the introduction of new methods.
METHODS
Demographic and Health Survey data from 13 countries were used to analyze changes in method use and method mix after the introduction of the injectable in the early 1990s. Subgroup analyses were conducted among married women who reported wanting more children, but not in the next two years (spacers), and those who reported wanting no more children (limiters).
RESULTS
Modern method use and injectable use rose for each study country. Increases in modern method use exceeded those in injectable use in all but three countries. Injectable use rose among spacers, as well as among limiters of all ages, particularly those younger than 35. In general, the increase in injectable use was partially offset by declines in use of other methods, especially long-acting or permanent methods.
CONCLUSION
Family planning programs could face higher costs and women could experience more unintended pregnancies if limiters use injectables for long periods, rather than changing to longer acting and permanent methods, which provide greater contraceptive efficacy at lower cost, when they are sure they want no more children. |
Application of a liquid chromatography-tandem mass spectrometry (LC/MS/MS) method to the pharmacokinetics of ON01910 in brain tumor-bearing mice. | ON01910 is a small molecular weight benzyl styryl sulfone currently under investigation as a novel anticancer agent. The purpose of the investigation was to develop a sensitive and reproducible liquid chromatography-tandem mass spectrometry (LC/MS/MS) method to quantitate levels of ON01910 in small amounts of five biological matrices: mouse plasma, feces, urine, normal brain and brain tumor. For all matrices, protein precipitation sample preparation was used, which led to linear calibration curves with coefficients of determination greater than 0.99. The lower limit of quantitation (LLOQ) was 5 ng/ml for all matrices except mouse urine, for which it was 10 ng/ml. The calibration standard curves were reproducible for all matrices, with inter- and intra-day variability in precision and accuracy being less than 15% at all quality control concentrations except for the LLOQ in mouse plasma, for which the accuracy was within 17%. The assay was successfully applied to characterize the systemic pharmacokinetics of ON01910 as well as its disposition in brain and brain tumor in mice. ON01910 exhibited a clearance of 3.61±0.85 l/h/kg and a half-life of 8.66±3.30 h at a 50 mg/kg dose given IV. |
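As a worked example of how the reported parameters relate, the snippet below derives the exposure (AUC) and apparent volume of distribution from the abstract's mean clearance and half-life, assuming a simple one-compartment IV model; the paper's own analysis may use non-compartmental methods, so treat this purely as illustrative arithmetic.

```python
import math

dose = 50.0        # mg/kg, IV dose reported in the abstract
cl = 3.61          # l/h/kg, mean clearance reported in the abstract
t_half = 8.66      # h, mean half-life reported in the abstract

auc = dose / cl                    # mg*h/l; AUC = Dose / CL for an IV dose
vd = cl * t_half / math.log(2)     # l/kg;  t1/2 = ln(2) * Vd / CL
print(f"AUC ~ {auc:.1f} mg*h/l, Vd ~ {vd:.1f} l/kg")
```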
Network Externalities, Competition, and Compatibility |
Vision based distance measurement system using single laser pointer design for underwater vehicle | As part of continuous research and development of underwater robotics technology at ITB, a vision-based distance measurement system for an Unmanned Underwater Vehicle (UUV) has been designed. The proposed system can be used to estimate the horizontal distance between the underwater vehicle and a wall in front of it, and at the same time the vertical distance between the vehicle and the surface below it. A camera and a single laser pointer are used to obtain the data needed by our algorithm. The vision-based navigation consists of two main processes: detection of the laser spot using image processing, and calculation of the distance based on the laser spot's position in the image. |
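A minimal sketch of the second process (range from the laser spot's image position) under a common single-laser triangulation geometry: the laser is assumed to be mounted parallel to the camera axis at a known baseline, and the angle to the spot is approximated linearly from its pixel offset. The constants are illustrative, not the calibration values used in the paper.

```python
import math

def distance_from_laser_spot(pixels_from_center: float,
                             baseline_m: float = 0.05,
                             rad_per_pixel: float = 0.0012,
                             rad_offset: float = 0.0) -> float:
    """Estimate range (m) from the laser spot's pixel offset in the image."""
    theta = pixels_from_center * rad_per_pixel + rad_offset
    return baseline_m / math.tan(theta)

print(round(distance_from_laser_spot(40), 3))  # ~1.041 m with these toy constants
```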
An Individual’s Connection to Nature Can Affect Perceived Restorativeness of Natural Environments. Some Observations about Biophilia | This study investigates the relationship between the level to which a person feels connected to Nature and that person's ability to perceive the restorative value of a natural environment. We assume that perceived restorativeness may depend on an individual's connection to Nature and this relationship may also vary with the biophilic quality of the environment, i.e., the functional and aesthetic value of the natural environment which presumably gave an evolutionary advantage to our species. To this end, the level of connection to Nature and the perceived restorativeness of the environment were assessed in individuals visiting three parks characterized by their high level of "naturalness" and high or low biophilic quality. The results show that the perceived level of restorativeness is associated with the sense of connection to Nature, as well as the biophilic quality of the environment: individuals with different degrees of connection to Nature seek settings with different degrees of restorativeness and biophilic quality. This means that perceived restorativeness can also depend on an individual's "inclination" towards Nature. |